An AGI Curriculum (Rough Draft)

Zayn Patel
Dec 3, 2022 · 7 min read

I believe college is the place where students who want to work in deep technical or research fields go to meet others who aspire to depth. I’m not convinced they all want to attend; some students who want to work in AGI might do better working as an apprentice to Greg Brockman. They attend college because it’s difficult to learn physics via a YouTube video or build complex robotics without lab equipment and a nerdy professor to answer their endless questions. The impact of these technologies, and of emerging ones like brain-computer interfaces and synthetic biology, is so large that traditional departments aren’t suitable anymore. MIT should create a school of artificial intelligence, just as Stanford created its new climate school.

After seeing that John Doerr donated $1.1B to create a climate school, I thought about what an AI school would look like. What would students learn? Would every course be technical, or would there be philosophy courses so students could debate what intelligence means and how the systems they’re building impact society?

I’ve sketched out three courses that I think should be taught in addition to classes on the first principles of how computers and chips work, the mathematics of deep learning and AI, and hardcore hacking classes so students become so good at programming they could beat Russia and China in a programming Olympics.

Class 1: A history of AI failures

John Carmack mentioned on Rogan that the AI field is a few breakthroughs away from AGI. He also said that many of these breakthroughs could exist in published papers that researchers haven’t reviewed in years. Revisiting these papers could surface a method that didn’t work because of a computational limitation in 1950 but could work on today’s supercomputers. I’m sure some students could build a language model that parses every AI paper published since the beginning of the field and finds at least five promising papers.
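
To make the mining idea concrete, here’s a minimal sketch of a script that flags compute-limited methods in old papers, assuming the abstracts have already been collected into a local papers.jsonl file. The file name and the keyword heuristic are illustrative assumptions; a serious version would use a language model rather than string matching:

```python
import json

# Phrases that often mark a method shelved for lack of compute.
COMPUTE_LIMITED_MARKERS = [
    "computationally infeasible",
    "prohibitively expensive",
    "limited by available hardware",
    "intractable for large",
]

def find_promising_papers(path="papers.jsonl", limit=5):
    """Return up to `limit` titles whose abstracts hint at compute-bound methods."""
    hits = []
    with open(path) as f:
        for line in f:
            paper = json.loads(line)  # expects {"title": ..., "abstract": ...}
            if any(m in paper["abstract"].lower() for m in COMPUTE_LIMITED_MARKERS):
                hits.append(paper["title"])
            if len(hits) >= limit:
                break
    return hits

if __name__ == "__main__":
    for title in find_promising_papers():
        print(title)
```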

In addition to searching for old approaches that work on new compute, survivorship bias is the other reason this course is important. Reinforcement learning, natural language processing, and GANs are approaches that sit on the tongues of computer science students, but what about the approaches that failed? Who thought of them, why did they not work, and are there discoveries to be made in areas nobody thought would contribute to AGI? A counterexample to the last question is neuroscience: there’s a cohort of researchers who think a deep understanding of the brain is necessary to build AGI, but DeepMind has shown that a systems-level understanding suffices.

Conventional wisdom says to learn from failures and avoid repeating them. I disagree. There should be as many researchers building new approaches to AGI as there are researchers repurposing old and promising ones. And if the AI field wants to model what good research looks like, a researcher could create a GPT-3 bot that summarizes why an approach didn’t work and logs it in a public database, so new researchers don’t repeat others’ recent failures. The bot could even label ideas as promising in the future but infeasible now, meaning that if a breakthrough in compute or algorithms happens in 2030, the approach should be tried again.
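
A hedged sketch of what that failure-logging bot might look like, using GPT-3’s legacy Completion API to write the summary and SQLite as a local stand-in for the public database; the schema and prompt are my assumptions:

```python
import sqlite3

import openai  # pip install openai; expects OPENAI_API_KEY in the environment

def log_failed_approach(paper_title, excerpt, db_path="failures.db"):
    """Summarize why an approach failed and record it in a shared database."""
    prompt = (
        "Summarize in two sentences why this AI approach failed, and label it "
        "'promising later' if it might work given future compute or algorithms:\n\n"
        f"{excerpt}"
    )
    # text-davinci-003 was the current GPT-3 model when this post was written.
    response = openai.Completion.create(
        model="text-davinci-003", prompt=prompt, max_tokens=120
    )
    summary = response.choices[0].text.strip()

    # Append to the failure log (SQLite here only as a local stand-in).
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS failures (title TEXT, summary TEXT)")
    conn.execute("INSERT INTO failures VALUES (?, ?)", (paper_title, summary))
    conn.commit()
    conn.close()
    return summary
```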

Class 2: Building language model algorithms

I imagine a future world where communication happens without words. Humans have brain-computer interfaces that instantly transmit messages to each other, and computers continue to receive signals through a shared network. But for now, language is the shared medium that connects everyone: humans with computers, humans with animals, computers with computers. For example, humans write code for a compiler, which runs operations on a computer’s hardware, and the computer writes readable output back to the human.

Language is the basis of human communication today, so having an algorithm that can translate words from one language to another has obvious importance. Some humans are limited by their programming literacy, so tools like OpenAI Codex, which let anyone type a sentence describing what they’d like a computer to do and see code as the output, reduce this barrier. Theoretically, this technology could create hundreds more developers, as long as each user has a vision of what they’d like to create. Jobs like accounting, where hours of human input are spent auditing corporate financial statements, could be replaced by an algorithm trained to read balance sheets. Customer service agents and stockbrokers too.
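
For instance, the sentence-to-code loop could be wired up in a few lines against the Codex-era Completion API. The model name, prompt format, and example task below are assumptions about how you might set it up, not a recommended configuration:

```python
import openai  # pip install openai; expects OPENAI_API_KEY in the environment

def sentence_to_code(request):
    """Turn a plain-English request into (hopefully runnable) Python."""
    response = openai.Completion.create(
        model="code-davinci-002",  # Codex model available at the time
        prompt=f"# Python 3\n# Task: {request}\n",
        max_tokens=150,
        temperature=0,             # deterministic output for reproducibility
    )
    return response.choices[0].text

print(sentence_to_code("read a CSV of balance sheets and total each column"))
```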

I think large language models could impact white-collar careers in the 2020s the same way automation impacted blue-collar careers in the 2010s. Early examples fit this trend: DALL-E 2 creates photorealistic images as good as a graphic designer’s in one tenth the time, and GitHub Copilot handles 30% of the programming work of developers who use it. It’s almost unsettling that something as human as language could be replaced by a computer that speaks in zeroes and ones.

Class 3: Co-existing with AGI

There are two concerns many people have about AGI. The first is economic: people want to know how it will affect their jobs. The second is societal: people want to know if AGI will bring the abundance that OpenAI’s founder, Sam Altman, talks about, or if it will destroy humanity (another possibility he mentions).

I think a seminar structure where students write essays on their view of the economic impact will help them debate well. I’d like to see students create alternative stimulus packages to UBI and run a simulation on every student’s proposed package to see which performs best, where “best” is measured by societal and economic good. The happiness index and consumer sentiment are probably not the right measures for the future because both were created before the AI revolution. I don’t think it’s fair to measure a changed world with dated indexes; that would be like measuring today’s happiness on a hunter-gatherer’s happiness scale, when we have access to much better questions and data today.
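
A toy version of that simulation, under loud assumptions: the population is just synthetic incomes, each package is a function that redistributes a fixed budget, and “best” is a crude well-being score (average income penalized by spread). A real class would swap in the better measures argued for above:

```python
import random

random.seed(0)
# Synthetic population: 10,000 log-normally distributed incomes.
incomes = [random.lognormvariate(10, 0.8) for _ in range(10_000)]

BUDGET = 5_000_000  # total stimulus to distribute, identical for every package

def ubi(incomes):
    """Universal basic income: split the budget evenly across everyone."""
    grant = BUDGET / len(incomes)
    return [x + grant for x in incomes]

def targeted(incomes):
    """Alternative package: give the whole budget to the bottom half."""
    cutoff = sorted(incomes)[len(incomes) // 2]
    recipients = sum(1 for x in incomes if x <= cutoff)
    grant = BUDGET / recipients
    return [x + grant if x <= cutoff else x for x in incomes]

def score(incomes):
    """Crude well-being score: mean income penalized by inequality (spread)."""
    mean = sum(incomes) / len(incomes)
    spread = max(incomes) - min(incomes)
    return mean - 0.001 * spread

for name, package in [("UBI", ubi), ("targeted", targeted)]:
    print(name, round(score(package(incomes))))
```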

As for societal impact, I think the Social Dilemma is to consumer technology as [] is to AGI. In consumer technology, the builders can act in ways misaligned with the user; in AGI, the machine can be misaligned with the world. There are 2 billion computers in the world, and if they possess misaligned AGI capabilities, they could destroy countries. Good beginning questions to discuss: who has governance over the data and how it’s used? Are there barriers to using AGI systems so that autocratic countries or immoral people don’t have easy access? How disproportionate do the haves and have-nots become if AGI is created?

---------------------------------------------------------------------------------------------

There are other courses, like how to read technical news, that I think are helpful because the velocity of discovery in AI is so high. But it’s not until students build a strong base of programming and mathematics that they can begin thinking about how to speed up their digestion of new sources. So those courses are probably for third- or fourth-year undergrads.

I think each course I’ve mentioned here should be an add-on to the first-year curriculum. For a technology this powerful, it’s critical that we bias the future builders’ minds to think of global benefit first, data second, and profit third.

