CaML: Compassion in Machine Learning

Using Instruction Pretraining to robustly increase compassion in machine learning models

 

Project mission

Current fine-tuning methods result in superficial, easily erased alignment, whereas scaling the volume of instruction-tuning data (Instruction Pretraining) has successfully produced robust behaviors. Compassion in Machine Learning (CaML) seeks to generate Instruction Pretraining data of AIs behaving with compassion, including moral curiosity. We will then further pretrain a model on this synthetic data and test its effectiveness at producing robust alignment. If successful, we will provide our dataset to labs and demonstrate how including it can cheaply improve the alignment and reliability of their future models without compromising capabilities.
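To make the continued-pretraining step concrete, here is a minimal sketch of what further pretraining on synthetic instruction data could look like using the Hugging Face libraries. The model choice, file name, data fields, and hyperparameters below are illustrative placeholders, not CaML's actual configuration.

```python
# Minimal sketch: continued pretraining on synthetic instruction data.
# Assumes a JSONL file where each record has "instruction" and "response" fields;
# the file name, model, and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # placeholder; any causal LM could be substituted
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical synthetic dataset of compassionate instruction/response pairs.
dataset = load_dataset("json", data_files="caml_synthetic_instructions.jsonl")["train"]

def to_text(example):
    # Fold each instruction/response pair into a single pretraining document.
    return {"text": f"Instruction: {example['instruction']}\nResponse: {example['response']}"}

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=1024)

texts = dataset.map(to_text, remove_columns=dataset.column_names)
tokenized = texts.map(tokenize, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="caml-continued-pretrain",
                           per_device_train_batch_size=4,
                           num_train_epochs=1),
    train_dataset=tokenized,
    # mlm=False gives the standard next-token objective, i.e. further pretraining.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The key design point is that the synthetic data is mixed into ordinary next-token pretraining rather than applied as a thin fine-tuning layer, which is what is hypothesized to make the resulting behavior harder to erase.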

Miles Tidmarsh, CEO

https://www.linkedin.com/in/milestidmarsh/


Miles co-founded Modeling Cooperation, which researches AI competition dynamics. He has worked as an economist for a federal think tank and has conducted both independent research (at university) and team-based research (at the Productivity Commission and Modeling Cooperation). He also co-founded CAIRE, an AI safety education startup. His strengths are in statistics, leadership, and research. He has been an EA for 10 years, helped organize EAGxAustralia 2023, and has completed the ‘Practical Deep Learning for Coders’ course from FastAI and ‘Machine Learning’ by Andrew Ng.

Joyee Chen, CTO

https://www.linkedin.com/in/joyeechen/ 


Joyee graduated from UC Berkeley with a BS in EECS. Under Berkeley’s Supervised Program for Alignment Research (SPAR), he assembled literature on scaffolding for the feasibility of automated alignment researchers with Dr. Bogdan-Ionut Cirstea, and honed his simulation and experimental skills studying cumulative risk metrics for causal generative world models. His strengths lie not only in his attention to detail but in asking “so what?” of every part of a hard question to distill its important parts. In the style of Hamming’s “You and Your Research”, he takes an iterative approach to problem-solving and to the alignment cause, trying diverse tools and approaches and seeing what sticks.



Jasmine Brazilek, COO

https://www.linkedin.com/in/jasmine-brazilek-70b0a4149/ 


Jasmine has over 6 years of experience in the technology industry, focusing on cybersecurity and data science. At Anthropic, she designed security systems for unique use cases, bridging gaps in existing tools. 

Highly productive and detail-oriented, Jasmine has also been involved in effective altruism for 7 years, primarily focusing on earning to give. Recently, she completed the ‘Practical Deep Learning for Coders’ course from FastAI and has shared her image and NLP models on HuggingFace.


Questions?

Email us or join our Slack channel for more information about the project.



Email us at: compassioninmachinelearning@gmail.com

News

See the latest CaML achievements here