
A fully‑typed TypeScript SDK for the SightEngine content‑moderation API, with streaming support, URL/file/base64 inputs, video moderation, a feedback endpoint, threshold presets, and helper utilities.
- ✅ Fully‑typed (TypeScript)
- 📦 Supports both ESM and CommonJS
- 📡 Stream and Buffer‑based image moderation
- 🌐 URL, 📁 file‑path, and 🗃️ base64 image inputs
- 🎥 Video moderation: short‑sync & async with callback
- 📝 Feedback endpoint for misclassification reporting
- ⚙️ Threshold presets + custom thresholds
- 🛠️ Helper utilities: `isNSFW()`, `listFlaggedCategories()`
- 🔄 Retries, timeouts, and custom HTTP agent support
- 🧪 Vitest‑powered tests
```bash
npm install sightenginejs
# or
yarn add sightenginejs
```
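
The package ships both ESM and CommonJS builds (see the feature list above), so either import style should work. The CommonJS form is shown as a comment since the rest of this README uses ESM:

```ts
// ESM
import { SightEngineClient } from 'sightenginejs';

// CommonJS
// const { SightEngineClient } = require('sightenginejs');
```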
```ts
import fs from 'node:fs';
import { SightEngineClient, PRESET_THRESHOLDS, isNSFW, listFlaggedCategories } from 'sightenginejs';

const client = new SightEngineClient({
  apiUser: 'YOUR_USER',
  apiSecret: 'YOUR_SECRET',
  models: ['nudity-2.1'], // add any other models you want to run
  // optional overrides:
  // timeout: 10000,
  // retries: 3,
  // httpAgent: new https.Agent({ keepAlive: true })
});

async function moderateImage() {
  // Buffer-based
  const buffer = fs.readFileSync('path/to/image.jpg');
  const resp = await client.moderate(buffer, 'image.jpg');

  // URL-based
  // const resp = await client.moderateUrl('https://example.com/image.jpg');

  // Check for NSFW content
  if (isNSFW(resp, PRESET_THRESHOLDS.strict)) {
    console.log('⚠️ Content flagged as NSFW:', listFlaggedCategories(resp));
  } else {
    console.log('✅ Content is safe');
  }
}
```
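
Stream‑based input is supported as well. The sketch below pipes a file read stream through `moderateStream()`, which (per the API reference further down) returns a `Readable` that emits a single `ModerationResponse`; listening on the `'data'` and `'error'` events is an assumption about how that object‑mode stream is consumed.

```ts
import fs from 'node:fs';
import { SightEngineClient } from 'sightenginejs';

const client = new SightEngineClient({ apiUser: 'YOUR_USER', apiSecret: 'YOUR_SECRET' });

// moderateStream() returns a Readable that emits one ModerationResponse object
const results = client.moderateStream(fs.createReadStream('path/to/image.jpg'));

results.on('data', (resp) => {
  console.log('Moderation result:', resp);
});
results.on('error', (err) => {
  console.error('Moderation failed:', err);
});
```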
- `opts.apiUser` (string, required) – Your SightEngine API user.
- `opts.apiSecret` (string, required) – Your SightEngine API secret.
- `opts.models` (string[]) – List of models to run.
- `opts.timeout` (number) – Request timeout in ms (default: 15000).
- `opts.retries` (number) – Number of retry attempts on 5xx errors (default: 2).
- `opts.httpAgent` (Agent) – Custom HTTP/HTTPS agent for connection pooling.

The constructor throws if `apiUser` or `apiSecret` is missing or empty.
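
For example, constructing a client without credentials fails immediately rather than at request time (a minimal sketch; the exact error type and message are not specified here):

```ts
import { SightEngineClient } from 'sightenginejs';

try {
  // apiSecret is empty, so the constructor rejects the configuration
  const badClient = new SightEngineClient({ apiUser: 'YOUR_USER', apiSecret: '' });
} catch (err) {
  console.error('Configuration error:', (err as Error).message);
}
```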
- `moderate(buffer: Buffer, filename?: string)` – Send a `Buffer` or `Uint8Array`.
- `moderateStream(input: Readable)` → `Readable` – Stream‑based input; emits one `ModerationResponse` object.
- `moderateUrl(url: string)` – Moderation by public image URL.
- `moderateFile(filePath: string)` – Moderation by local file path.
- `moderateBase64(b64: string, filename?: string)` – Moderation by base64‑encoded string.
- `feedback(model: string, clazz: string, source: string | Buffer | Readable)` – Report a misclassification. Posts to `/1.0/feedback.json`.
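
A combined sketch of the file, base64, and feedback helpers listed above (the file paths and the `'nudity-2.1'` / `'safe'` feedback values are illustrative, not required constants):

```ts
import fs from 'node:fs';
import { SightEngineClient } from 'sightenginejs';

const client = new SightEngineClient({ apiUser: 'YOUR_USER', apiSecret: 'YOUR_SECRET' });

async function moreExamples() {
  // Moderation by local file path
  const fromFile = await client.moderateFile('./uploads/photo.jpg');

  // Moderation by base64-encoded string
  const b64 = fs.readFileSync('./uploads/photo.jpg').toString('base64');
  const fromBase64 = await client.moderateBase64(b64, 'photo.jpg');

  // Report a misclassification (here: tell SightEngine the image was actually safe)
  await client.feedback('nudity-2.1', 'safe', fs.readFileSync('./uploads/photo.jpg'));

  console.log(fromFile, fromBase64);
}
```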
```ts
import { DEFAULT_THRESHOLDS, PRESET_THRESHOLDS, isNSFW, listFlaggedCategories } from 'sightenginejs';

const resp = await client.moderate(buffer);
console.log(listFlaggedCategories(resp, PRESET_THRESHOLDS.moderate));
console.log(isNSFW(resp, DEFAULT_THRESHOLDS));
```
- `DEFAULT_THRESHOLDS` – sensible defaults.
- `PRESET_THRESHOLDS` – `strict`, `moderate`, and `lenient` profiles.
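
Custom thresholds (from the feature list) can be passed wherever a preset is accepted. The sketch below assumes a threshold object is a plain category‑to‑score map that can be derived from `DEFAULT_THRESHOLDS`; the category key shown is illustrative:

```ts
import fs from 'node:fs';
import { SightEngineClient, DEFAULT_THRESHOLDS, isNSFW } from 'sightenginejs';

// Stricter on nudity than the defaults – the key name is illustrative
const customThresholds = {
  ...DEFAULT_THRESHOLDS,
  nudity: 0.3,
};

async function checkWithCustomThresholds() {
  const client = new SightEngineClient({ apiUser: 'YOUR_USER', apiSecret: 'YOUR_SECRET' });
  const resp = await client.moderate(fs.readFileSync('path/to/image.jpg'), 'image.jpg');
  console.log(isNSFW(resp, customThresholds));
}
```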
```bash
npm run test
# generates coverage reports in coverage/
```
We use Vitest and nock to stub HTTP calls and ensure high coverage.
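
A minimal sketch of what such a test can look like, assuming the client talks to `https://api.sightengine.com/1.0/check.json` and that the stubbed response shape below is close enough to a real `ModerationResponse`:

```ts
import { describe, it, expect } from 'vitest';
import nock from 'nock';
import { SightEngineClient } from 'sightenginejs';

describe('SightEngineClient', () => {
  it('returns the parsed moderation response', async () => {
    // Stub the SightEngine API so no real HTTP request is made
    nock('https://api.sightengine.com')
      .post('/1.0/check.json')
      .reply(200, { status: 'success', nudity: { sexual_display: 0.97 } });

    const client = new SightEngineClient({ apiUser: 'u', apiSecret: 's' });
    const resp = await client.moderate(Buffer.from('fake-image'), 'fake.jpg');

    expect(resp.status).toBe('success');
  });
});
```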
- Fork the repo
- Create a feature branch (`git checkout -b feat/...`)
- Commit with Commitizen (`npm run commit`)
- Push & open a PR against `develop`
- Tests must pass, coverage ≥ 90%
MIT © Ali Nazari
Built with ❤️ by Ali Nazari, for developers. Happy moderating!