Privacy & Security
Last updated: Oct 3, 2024
At CuratedAI, we prioritize the privacy and security of your data at every step of your interaction with our tool. Below, we outline the robust measures we have implemented to protect your information, ensuring compliance with data protection regulations, including GDPR.
Summary
All data is securely stored and processed within the EU (Germany and France).
If you're using our paid services, your data is never used to train our AI. We only use documents uploaded by free users to train our proprietary models, and we always anonymize them beforehand.
Your data is not shared with OpenAI or used to train their models.
We automatically delete your documents after 48 hours, unless you ask us to keep them longer.
We use top-tier encryption (AES-256) and secure cloud services to keep your data safe.
Hosting and Storage
We leverage industry-leading cloud providers to store and process your data in secure, geographically appropriate locations:
EU Data Hosting: All data you upload to CuratedAI is securely stored within the European Union in compliance with GDPR. Document uploads are hosted in AWS Frankfurt, ensuring EU data residency, while our AI models run in Azure's France data centers. The Azure OpenAI Service guarantees that no customer data used in processing is shared with OpenAI or stored outside the EU.
Data Retention: We securely delete all uploaded contracts and associated documents 48 hours after your review is completed, unless you choose otherwise. We can adjust this period upon request for users with extended contracts or custom retention needs. A sketch of how this kind of scheduled deletion can work appears after this list.
Custom B2B Solutions: For enterprise clients, we offer dedicated, segregated instances of our platform for data storage and processing. Each instance is isolated, ensuring that no data is shared between tenants and that all data is stored exclusively on infrastructure dedicated to the individual client.
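For illustration only, the sketch below shows one way a periodic cleanup job could enforce a 48-hour retention window on an S3 bucket. The bucket name, prefix, and region are placeholders, not our production configuration.

```python
from datetime import datetime, timedelta, timezone

import boto3

# Illustrative sketch only: a scheduled job that removes uploads older than
# 48 hours. Bucket name and key prefix are hypothetical placeholders.
RETENTION = timedelta(hours=48)
BUCKET = "curatedai-uploads-example"   # hypothetical bucket name
PREFIX = "documents/"                  # hypothetical key prefix

def purge_expired_uploads() -> None:
    s3 = boto3.client("s3", region_name="eu-central-1")  # AWS Frankfurt
    cutoff = datetime.now(timezone.utc) - RETENTION
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
        for obj in page.get("Contents", []):
            # LastModified is a timezone-aware datetime returned by S3.
            if obj["LastModified"] < cutoff:
                s3.delete_object(Bucket=BUCKET, Key=obj["Key"])

if __name__ == "__main__":
    purge_expired_uploads()
```

In practice such a job would run on a fixed schedule (e.g. hourly), so no document outlives the retention window by more than the scheduling interval.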
Security
Our platform implements state-of-the-art encryption and security protocols to protect data in transit and at rest:
Encryption:
All communications between your browser and our servers are protected by SSL/TLS encryption, preventing unauthorized access to your data during transmission.
Data at rest is encrypted using AES-256, one of the most secure encryption standards, ensuring that data cannot be read or accessed without proper authorization. A minimal illustration of AES-256 encryption appears after this list.
For Azure-hosted data, double encryption is available using customer-managed keys, further enhancing security for sensitive client data.
Cloud Provider Security:
AWS: Data stored in AWS Frankfurt is protected by multiple layers of security, including encryption at rest, DDoS protection, and ongoing monitoring for potential threats. AWS complies with several international security standards, including ISO 27001 and SOC 1, 2, and 3.
Azure: Azure applies its own rigorous security measures, including multi-factor authentication (MFA), secure identity management, and 24/7 monitoring. Azure does not retain any customer data processed through its OpenAI service and applies strict data residency rules, ensuring that data remains within designated EU boundaries.
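For readers who want a concrete picture of what AES-256 encryption looks like, the following is a minimal, hypothetical sketch using the open-source Python cryptography library's AES-256-GCM primitive. It is illustrative only; in our platform, at-rest encryption and key management are handled by the managed services of our cloud providers rather than by application code like this.

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Minimal illustration of AES-256 authenticated encryption (AES-256-GCM).
# In production, keys are managed by the cloud provider (e.g. AWS KMS or
# Azure Key Vault); this standalone example only shows the primitive itself.

key = AESGCM.generate_key(bit_length=256)   # 256-bit key -> AES-256
aesgcm = AESGCM(key)

document = b"confidential contract text"
nonce = os.urandom(12)                      # unique 96-bit nonce per message

ciphertext = aesgcm.encrypt(nonce, document, None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == document
```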
Model Training and Data Use
We are fully transparent about how your data interacts with our AI models:
No User Data Is Used to Train OpenAI Models: For all users, whether free or paid, none of your data is sent to OpenAI or used to improve their models. We use the Azure OpenAI Service, which ensures that your inputs, outputs, embeddings, and training data:
Are not available to other customers or OpenAI.
Are not used to train or improve Azure or OpenAI models.
Are not used to enhance any Microsoft or third-party services without your permission.
Azure OpenAI operates fully within Microsoft's Azure environment, without interacting with OpenAI-operated services like ChatGPT or OpenAI's API. You can learn more about the Azure OpenAI Service's measures for data, privacy, and security on Microsoft's dedicated documentation page.
Free Version: We use documents uploaded by users of our free version to train our own AI models. Any internal model training is conducted exclusively on pseudonymized or, where possible, anonymized data, so we can improve our proprietary models without compromising user privacy.
Paid Services (Credit-Based Reviews): For users of our paid services, no data from document reviews is ever used to train our models. Your data remains confidential and is deleted according to our data retention policy unless otherwise specified by you.
Data Pseudonymization and Anonymization
We take additional steps to ensure that your data remains private:
No Personal Information Used to Train AI: We do not use any personal information provided by users for model training. Our models are trained only on datasets that have been de-identified and, where necessary, anonymized.
Data Anonymization: Before any data is used for internal purposes, such as improving our models, we apply thorough anonymization techniques to remove any identifying information. These techniques include both neural models that perform NER (Named Entity Recognition) and rule-based models. This ensures that your personal or sensitive information is never exposed or used inappropriately.
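As a rough illustration of how NER-plus-rules anonymization can work, the sketch below uses the open-source spaCy library together with simple regular expressions. It is a hypothetical example, not our production pipeline; the entity labels, patterns, and placeholder tokens are assumptions chosen for clarity.

```python
import re

import spacy

# Hypothetical sketch of NER + rule-based anonymization; not our actual pipeline.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

# Rule-based patterns for identifiers that NER models typically miss.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
PHONE = re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b")

# Entity labels treated as identifying in this example.
SENSITIVE_LABELS = {"PERSON", "ORG", "GPE", "LOC", "DATE"}

def anonymize(text: str) -> str:
    doc = nlp(text)
    # Replace entities from the end of the string so earlier offsets stay valid.
    for ent in sorted(doc.ents, key=lambda e: e.start_char, reverse=True):
        if ent.label_ in SENSITIVE_LABELS:
            text = text[:ent.start_char] + f"[{ent.label_}]" + text[ent.end_char:]
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

print(anonymize("Contract between Acme GmbH and Jane Doe, signed 3 May 2023. Contact: jane@acme.de"))
```

A production pipeline would combine several such passes and review their output, since no single model or rule set catches every identifier on its own.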
We continuously update our practices in line with the latest regulatory requirements. If you have questions or comments regarding our privacy and security measures, please contact us at founders@curatedai.eu.
For additional information on how we handle personal data, see our Privacy Policy.