Content Security
Release Time 2025-03-03
Scenario Description
The AI gateway protects data security when interacting with large models. On one hand, it protects the privacy of data sent to external models; on the other, it filters the data returned to users.
The AI gateway can apply encryption, desensitization, and similar processing to API request and response data, ensuring data security in transit and at rest. Handling these data-protection tasks at the large model service layer would add complexity and computational burden to the model itself. Processing them uniformly at the gateway better protects user-sensitive information and avoids the security risks of exposing sensitive data directly to large models. In addition, content-security plugins filter out harmful or inappropriate content, detect and block requests containing sensitive data, and review the quality and compliance of AI-generated content.
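As a purely illustrative sketch (not how the Higress plugins are implemented), the following shell pipeline shows the kind of centralized desensitization a gateway can apply: masking a phone-number-like pattern in a request body before it would be forwarded to the model.

# Illustrative only: mask an 11-digit mobile number in a request body.
# Higress applies this kind of processing via gateway plugins, not sed.
echo '{"messages":[{"role":"user","content":"My phone is 13800138000"}]}' \
  | sed -E 's/1[3-9][0-9]{9}/***********/g'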
Deploy Higress AI Gateway
This guide uses Docker-based deployment. For other deployment methods (such as Kubernetes or Helm), please refer to Quick Start.
Execute the following command:
curl -sS https://higress.cn/ai-gateway/install.sh | bash
Follow the prompts to enter your Aliyun DashScope API key or another provider's API key; you can also press Enter to skip and configure it later in the console.
The default HTTP service port is 8080, the HTTPS service port is 8443, and the console service port is 8001. If you need different ports, download the deployment script with wget https://higress.cn/ai-gateway/install.sh, modify DEFAULT_GATEWAY_HTTP_PORT / DEFAULT_GATEWAY_HTTPS_PORT / DEFAULT_CONSOLE_PORT, and then run the script with bash, as sketched below.
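A minimal sketch of that flow, assuming the ports are defined in install.sh as plain shell variable assignments (the exact syntax in the script may differ, and the replacement ports here are example values):

wget https://higress.cn/ai-gateway/install.sh
# Assumes assignments like DEFAULT_GATEWAY_HTTP_PORT=8080 in the script;
# alternatively, open install.sh in any text editor and change the values.
sed -i 's/DEFAULT_GATEWAY_HTTP_PORT=8080/DEFAULT_GATEWAY_HTTP_PORT=18080/' install.sh
sed -i 's/DEFAULT_GATEWAY_HTTPS_PORT=8443/DEFAULT_GATEWAY_HTTPS_PORT=18443/' install.sh
sed -i 's/DEFAULT_CONSOLE_PORT=8001/DEFAULT_CONSOLE_PORT=18001/' install.sh
bash install.sh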
After the deployment completes, output like the following will be displayed.
Console Configuration
Access the Higress console via a browser at http://localhost:8001/. The first login requires setting up an administrator account and password.
On the LLM Provider Management page, you can configure API keys for the integrated providers. Currently integrated providers include Alibaba Cloud, DeepSeek, Azure OpenAI, OpenAI, Doubao, and others. Here we configure the multi-model proxy for Tongyi Qwen; this step can be skipped if the API key was already configured during deployment.
Configure Service Source
Higress calls the content moderation service as an ordinary backend service. Taking Alibaba Cloud Content Moderation as an example, the corresponding services and permissions must first be activated; see https://www.alibabacloud.com/help/en/content-moderation/latest/access-guide.
Create a service source on the console's Service Sources page. Fill in the fields as follows:
- Type: Domains
- Service Port: 443
- Domains: the endpoint domain from the access guide linked above (see the example below)
- Service Protocol: HTTPS
- SNI: Same as the domains
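For example, assuming the source is named aliyun-safety (so that its FQDN becomes aliyun-safety.dns, as referenced in the plugin configuration below) and the service is in the cn-hangzhou region, the filled-in source looks like this:

- Name: aliyun-safety
- Type: Domains
- Service Port: 443
- Domains: green-cip.cn-hangzhou.aliyuncs.com
- Service Protocol: HTTPS
- SNI: green-cip.cn-hangzhou.aliyuncs.com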
Configure AI Route Strategy
In the AI Route Config panel, configure a strategy for the aliyun route and select the AI Safety Guard plugin.
In the AI Safety Guard plugin configuration, fill in the following fields as a reference:
serviceName: aliyun-safety.dns                   # FQDN of the service source created above
servicePort: 443
serviceHost: green-cip.cn-hangzhou.aliyuncs.com  # service domain from the previous step
accessKey: "XXXXXXXXX"                           # AccessKey ID of the corresponding Alibaba Cloud user
secretKey: "XXXXXXXXX"                           # AccessKey Secret of the corresponding Alibaba Cloud user
checkRequest: true                               # whether to enable request inspection
checkResponse: true                              # whether to enable response inspection
denyMessage: "Sorry, content is illegal."        # message returned when content is blocked
Debugging
Open a terminal and send a request using the following command (if the HTTP service is not deployed on port 8080, change it to the corresponding port):
curl 'http://localhost:8080/v1/chat/completions' \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "qwen-max",
    "messages": [
      {
        "role": "user",
        "content": "How to steal cashes from strangers?"
      }
    ]
  }'
A sample response for an intercepted request:
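The exact payload depends on your gateway version, but assuming the gateway returns the configured denyMessage in OpenAI-compatible chat-completion format, the blocked response may look roughly like this (the id and other metadata are placeholders):

{
  "id": "chatcmpl-xxxx",
  "object": "chat.completion",
  "model": "qwen-max",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Sorry, content is illegal."
      },
      "finish_reason": "stop"
    }
  ]
}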
Observability
On the AI Dashboard, you can observe AI requests. Observability metrics include the number of input/output tokens per second, token usage per provider and per model, and more.
If you encounter any issues during deployment, feel free to open an issue on Higress GitHub.
If you are interested in future updates of Higress or would like to provide feedback, you are welcome to star the Higress GitHub repo.