Use blind-autonomous mode only at your own risk and liability. Before enabling it, make sure you understand the kinds of code likely to be produced and executed (both the solution and the tests), and the general risks of running unattended code (e.g. unexpected file/network access, data exfiltration, cost overruns, or irreversible changes).
Remember not to use SingULAR in any mode for malicious or adversarial purposes, cyber-attack–related prompts, or jailbreaking attempts. If you do so, you assume full responsibility for your actions; the SingULAR authors and contributors bear no responsibility or liability. If your project touches on or approaches such areas, it is your sole responsibility to carefully review all generated code before each execution. If you are unsure whether your prompt might unintentionally produce such code, err on the side of caution: review every iteration and do not use blind-autonomous mode.
The blind-autonomous mode carries inherent security risks and must only be used in a sterile, disposable environment.
The code was tested in Google Colab. If you want to run it in another sterile environment, read the guidelines below, but do not run blind-autonomous mode directly on your personal machine; this is strongly discouraged.
The recommended setup is a clean Google Colab runtime with:
- No access to Google Drive
- No access to local or persistent credentials — unless you are willing to take the risk
- Minimal scoped API access only
For example, my personal setup uses:
- a Cursor API key (required)
- a read-only Hugging Face token (for my use cases)

I am fully aware of the hypothetical possibility that even a read-only token could be accessed by generated code and transmitted to Cursor or to another external endpoint.
Additionally, do not use blind-autonomous mode with API tokens tied to pay-as-you-go or usage-based billing accounts for any service.
This isolation model is intended to minimize risk in pathological or adversarial scenarios—such as code generated by Cursor misusing its own API access or attempting unintended network activity. While such behavior is unlikely under normal use, deliberately jailbreaking the agent may produce unsafe outcomes.
Running blind-autonomous mode outside of Colab is not recommended unless you:
- build an equivalent sterile environment (e.g. MicroVMs), or
- are an experienced Docker user who fully understands and accepts the risks
If you're eager to try blind mode anyway, a good practice is to first check whether your projects complete in few enough iterations to justify skipping review. Note that because SingULAR sends Cursor reports on each task, the agent's user-specific resources may be fine-tuned over time, so running the same or similar tasks repeatedly might let SingULAR finish in fewer iterations.
To reduce accidental misuse and limit liability:
- Both the library and the example notebook ship with `ALLOW_BLIND_EXECUTION = False` hardcoded
- Enabling blind-autonomous mode requires explicitly modifying the supplied code
- This ensures users consciously opt in to unattended execution
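The opt-in guard described above can be sketched as follows. This is an illustrative Python sketch, not the library's actual implementation: the flag name `ALLOW_BLIND_EXECUTION` comes from the text above, but the function name and error message are hypothetical.

```python
# Hardcoded default shipped with the library and the example notebook.
# A user must edit this line in the source to enable unattended runs.
ALLOW_BLIND_EXECUTION = False


def run_iteration_blind(task: str) -> None:
    """Hypothetical entry point: refuses to execute generated code
    unattended unless the user has consciously opted in by editing
    the source."""
    if not ALLOW_BLIND_EXECUTION:
        raise RuntimeError(
            "Blind-autonomous mode is disabled. Edit the source and set "
            "ALLOW_BLIND_EXECUTION = True to consciously opt in to "
            "unattended execution."
        )
    # ... generate, execute, and test code for `task` without review ...
```

Because the flag is a module-level constant rather than a runtime parameter or environment variable, it cannot be flipped accidentally from a prompt or a config file; enabling blind mode always leaves a deliberate edit in the user's copy of the code.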