James Norris is Founder & Executive Director of the Center for Existential Safety, based in San Francisco, California, United States.

For as long as I can remember I've been obsessively searching for the best ways to improve myself and the world. I deeply believe all sentient beings deserve to flourish. Naturally, this has led me to lead an unorthodox life. I have a polymathic spirit and believe "work is much more fun than fun" (Noel Coward). I started my first microbusiness at 6 and have been working relentlessly ever since. All in, I have experience in 20+ fields. But for the sake of simplicity, you can say I've primarily focused on (1) behavior change and (2) social change.

Life highlights:
- Co-founded or helped build 27 organizations (59% still operating, with 200+ cumulative years in operation and millions in revenue)
- Learned from 23 jobs and internships in startup, corporate, and academic environments
- Graduated from the University of Texas at Austin with 3 majors, 4 minors, and 2 programs
- Experienced 4,040 "life firsts" in a 26-year-long experiential learning experiment

Most interesting work:
- Developed a protocol for helping people optimize their lives as quickly and cost-effectively as theoretically possible
- Co-founded Effective Altruism Global, the international conference series for effective altruists and, at the time, the world's largest conference series highlighting AI safety
- Founded Self Spark, the global lifehacking event series
- Co-founded Polymath Project, a nearly launched physical university for thousands of young Leonardo da Vincis
- Helped Stanford ChangeLabs pioneer a framework for social change called "system acupuncture"
- Helped RoadStoryUSA develop a venture-backed museum/theme park edutainment center
- Helped OnHand Agrarian invent a new type of high-density, sustainable fish farming

Research areas:
- Strategic life optimization
- Behavioral shoves (anti-nudges)
- Systemic / systematic behavior change
- Systemic / systematic social change

Personal plea: After 25+ years of consideration, I believe that if we successfully grow AGI we will all die soon after. Therefore, humanity must pause all existentially risky AI development immediately. Ideally, years ago.

1. We must prove AGI can be safely grown.
2. Then we may extremely carefully grow it.
3. Then we should fairly distribute the unimaginably large bounty to all.

If we skip step #1 we will likely cause our own extinction. We need effective international governance to help us wisely navigate transformative technologies like AGI. Let's create that as fast as possible, not AGI.

Sign and spread the petition: iaiga.org. Join the fight at: existentialsafety.org. We must not gamble with humanity's future.
Current Position: Founder & Executive Director
Company: Center for Existential Safety
Location: San Francisco, California, United States, North America
