Image and text classification via Transformers (e.g., NSFW detection, sentiment/toxicity analysis). More tasks can be added over time.
No. CPU is fine for small models. If your deep-learning framework (e.g., PyTorch) is installed with CUDA support, GPU acceleration is used automatically.
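A minimal device-selection sketch of how that fallback typically works (assuming PyTorch as the backing framework; Moderators' internal logic may differ):

```python
# Falls back to CPU when PyTorch is absent or was built without CUDA.
try:
    import torch
    device = "cuda" if torch.cuda.is_available() else "cpu"
except ImportError:
    device = "cpu"

print(device)  # "cuda" on a GPU machine, otherwise "cpu"
```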
If a dependency is missing (e.g., torch, transformers, Pillow), Moderators can auto-install it via uv or pip unless you disable this behavior.
To disable auto-installation:

```shell
export MODERATORS_DISABLE_AUTO_INSTALL=1
```

For manual dependency control:

```shell
pip install "moderators[transformers]"
```

Yes. Use `--local-files-only` in the CLI or `local_files_only=True` in Python once the model is already cached.
CLI:

```shell
moderators model-id input.jpg --local-files-only
```

Python:

```python
moderator = AutoModerator.from_pretrained("model-id", local_files_only=True)
```

Regardless of the underlying pipeline, you always get the same result schema (a `PredictionResult` with `classifications`/`detections`/`raw_output`), so your application code stays simple and consistent across different models.
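The shape of that shared schema can be sketched with plain dataclasses. Only the three field names above come from the source; everything else here (the `Classification` type, its `label`/`score` fields) is an illustrative assumption, not the library's actual definitions:

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Classification:
    label: str    # e.g. "nsfw" -- label names depend on the model (assumed shape)
    score: float  # model confidence

@dataclass
class PredictionResult:
    classifications: list[Classification] = field(default_factory=list)
    detections: list[Any] = field(default_factory=list)  # filled by detection pipelines
    raw_output: Any = None  # the unmodified framework output

# Application code can stay model-agnostic: pick the top label
# the same way no matter which pipeline produced the result.
result = PredictionResult(classifications=[Classification("nsfw", 0.02)])
top = max(result.classifications, key=lambda c: c.score)
print(top.label)  # prints "nsfw"
```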
Yes! As long as your model has a config.json file and is compatible with Transformers, you can use it with Moderators. Just point to the model directory or Hugging Face model ID.
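As a quick sanity check before pointing Moderators at a local directory, you can verify the layout yourself; this helper and the directory name are hypothetical, but the `config.json` requirement is the one stated above:

```python
from pathlib import Path

def looks_like_transformers_model(model_dir: str) -> bool:
    """Return True if the directory contains the config.json a
    Transformers-compatible checkpoint needs (hypothetical helper)."""
    return (Path(model_dir) / "config.json").is_file()

# Hypothetical local checkpoint directory:
print(looks_like_transformers_model("my-custom-model"))
```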
Check out the GitHub repository to open issues or submit pull requests. Feature requests and contributions are welcome!