Fireside with Sam Altman

Shawn:

Welcome, all of you. Thank you for joining us today at the inaugural Big Compute conference, held in the beautiful SFJazz Center in downtown San Francisco. We're going to be talking about a lot of topics today, but the general theme is that the world has seen big data, and now we're seeing the advent of big compute, and we're going to talk about how it's changing the world. Here with us is Sam Altman, CEO of OpenAI. Thank you very much.

Sam, let's kick this off with a comment you recently made at an address at Google, where a participant asked you what's more valuable: big data or open source machine learning algorithms. And I think you said the following: 'I used to ask companies how they're going to get a lot of data. Now, I ask them how they're going to get a lot of compute.'

Can you talk a little bit about what you meant by that?

Sam:

So, I certainly think data is still important and still valuable, but it turns out that the most impressive advances we've had in the field of AI research have been more about massive compute than massive data. It turns out there's lots of data available on the internet. There's also, in some cases, a sort of E = mc² equivalence between a lot of compute and a lot of data, because you can use a lot of compute to generate a lot of data.

So if you think about one of OpenAI's results from last year with Dota 2: we beat the best team in the world with no data whatsoever. The entire thing was the agents playing against each other, exploring the environment, trying what worked, stopping what didn't, with good RL algorithms driving it. And there are other cases like that as well, where we need very little data if we have a lot of compute to run a lot of simulations.
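
To make the self-play idea concrete, here is a minimal toy sketch, not OpenAI's actual system: two agents play a simple game against each other and reinforce whatever wins, so all of the "training data" is generated by the play itself. The game, the weights, and the update rule here are purely illustrative assumptions.

```python
import random

# Toy self-play: two agents generate their own experience by playing
# each other; no external dataset is involved.
ACTIONS = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def sample(weights):
    # Pick an action in proportion to its current weight.
    return random.choices(ACTIONS, weights=[weights[a] for a in ACTIONS])[0]

weights_a = {a: 1.0 for a in ACTIONS}
weights_b = {a: 1.0 for a in ACTIONS}

for episode in range(10_000):
    act_a, act_b = sample(weights_a), sample(weights_b)
    if act_a == act_b:
        continue  # draw: nothing to reinforce
    if BEATS[act_a] == act_b:
        weights_a[act_a] += 0.1  # agent A won: try that action more often
    else:
        weights_b[act_b] += 0.1  # agent B won: try that action more often

print(weights_a, weights_b)  # both drift toward a mixed strategy
```

In the real Dota 2 work the "game" is vastly richer and the update is a full reinforcement learning algorithm, but the loop is the same shape: play, score, reinforce.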

Now, for a lot of businesses, of course, it is important to have data, but for the field in general, I suspect data will be the least important of those three categories: data, compute, and algorithms.

Shawn:

Can you talk a little bit about the big opportunities you see emerging because of the advent of big compute? We're going to talk about artificial intelligence, but maybe there are other opportunities you see.

Sam:

Honestly, I have such incredible blinders on that I care about that more than everything else put together, and there probably are lots of other opportunities, but that’s the one I think about 16 hours a day, so they aren’t coming to mind.

Shawn:

Let's talk about that a little more. There are a couple of milestones you've posted on OpenAI's site. One of them was the Dota 2 win. Another was human-like dexterity with a Rubik's Cube. And another was language, with a model you posted recently. Will you talk a little bit about some of these exciting milestones you see happening at OpenAI?

Sam:

Yeah, there are so many. Maybe we'll focus on language, because I think that's going to be the most applicable to all businesses. I think one of the most exciting developments in the field in the last few years has been how good natural language is getting, meaning how good AI for natural language is getting.

I think we are going to see an explosion in the next few years of systems that can really process, understand, interact with, and generate language, and I think it'll be the first way that people really feel powerful AI, because you'll be able to interact with these systems the way you do when talking to somebody else. You'll be able to have dialogue that actually makes sense. Computers will be able to process huge volumes of very unstructured text, and you'll be able to interact with that system however you want, and get what you want.

Shawn:

Can you talk a little bit about the compounding effects of the mundane? We've got short-term mundane effects and long-term fantastic things with AI, like this idea of feeding stuff into smart speaker systems or digital assistants. Do you see anything happening with AI that my mom, watching today, might be able to understand, something she might feel compounding in her life?

Sam:

I think one big one is speech recognition. Most people remember how bad it was five years ago, and people who use Siri or whatever have noticed that it gets a little bit better every year. Actually, not a little, a lot better every year, and now it basically doesn't mess up, even in difficult environments, or at least it doesn't mess up appreciably more than humans do when they're trying to understand each other. So I think that's one that seems to resonate with a lot of people, because it's in recent memory.

Shawn:

When somebody comes to your website, to OpenAI, and they're trying to get a sense of what you actually do, I think my mother would have a hard time understanding what that is. I wonder if you could talk a little bit about how what you're doing affects the average, everyday person.

Sam:

Well, we work on a very long time horizon. I think there have been three great technological revolutions so far in human history: the agricultural revolution, the industrial revolution, and the computer revolution. I think we are now in the early innings of the AI revolution, and I expect it to be bigger than all three previous ones put together. Thinking, understanding, intelligence: that really is what makes humans human, much more than our ability to get physical stuff done in the world. So I think this is going to be a huge deal and impact life in a lot of ways.

I've been very inspired, as many others have, by the example of Xerox PARC, whose technology enabled the computer revolution in a lot of ways. Alan Kay, who is one of my research heroes, used to nicely berate me, saying these companies never think big enough: 'At Xerox PARC we created,' I think the number by his count was, '30 trillion dollars of value. Who's thinking on that scale?' And my hope is that the work we do enables 300 trillion dollars of value for other companies, and, you know, we'll also hope to capture some of that value ourselves.

But we really want to figure out this grand challenge: how does intelligence work, and how can we make that available to people?

Shawn:

Let's talk a little bit more about that revolution. Revolution can be scary. You mentioned, in a recent chat with the CTO of Microsoft, the book called Pandemonium, and how people from the Industrial Revolution were quoted in first-hand accounts saying machines were going to take over the world, we were all going to die, and all these problems were going to happen.

But you’ve been a very optimistic voice in the middle of all that.

Sam:

Well, they all said that hundreds of years ago, and we're still here. We're still all very busy. We still have work to do. I think it is both true that this time is different and also that all of this has happened before and will happen again. This will be different; general intelligence is a powerful thing. But I also believe that, just as it was hard at the time of the industrial revolution to imagine the jobs of computer programmers working with big compute, it's hard for us to sit here and think about what the jobs on the other side of this will be. Human demand, desire, and creativity seem pretty limitless, and I think we will find new things to do. Betting against that has always been a mistake.

Shawn:

Talk a little bit more about the revolution you see happening. You've said in multiple talks that AGI is kind of a stupendous and difficult thing for people to talk about and imagine.

Sam:

I think it's very hard to say definitively what the world looks like when computers are more intelligent in some ways than humans, or when computers can do most work that humans can. So the only prediction I can make with confidence is that things will be very different, and I think anyone who says we're going to keep everything the same is lying. But although change is inevitable, we can work really hard to make sure the future, guaranteed as it is to be different, is better.

Shawn:

In that same address with Microsoft, you talked about how the computers we're going to see in the next five years are going to be mind-blowing. If we take x86 and GPGPU stuff off the table, and even the perennial question of whether quantum computing arrives in the next five years, do you see anything else that will be mind-blowing in that time?

Sam:

So, I think it is true that Moore's Law is slowing down. People have all kinds of ideas about things they're going to do to keep it going. Maybe they work, maybe they don't. But the version of it that is important for AI specifically is: how big can we make our biggest models, however we get there? You know, plug a bunch of computers together, optical interconnects, whatever it takes to train these massive models. That has been growing about 8x per year for about eight years now, and I think it's gonna keep going like that for about five more years. Again, at this point I have this narrow focus on one thing that's really important to me. There are probably a lot of other things that are gonna happen for compute, but the question is whether we're gonna have bigger and bigger computers to train neural networks on, and the answer is yes, and that's super exciting.
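
For a sense of how fast 8x per year compounds, here is a back-of-the-envelope calculation. The growth rate is Sam's figure; the starting value of 1.0 is an arbitrary unit, so only the ratios mean anything:

```python
# Compound growth of the largest training runs at roughly 8x per year.
compute = 1.0  # today's largest run, in arbitrary units
for year in range(1, 6):
    compute *= 8
    print(f"year {year}: {compute:,.0f}x today's largest run")
# Year 5 prints 32,768x: five more years at 8x/year is 8**5 = 32,768.
```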

Shawn:

There's a big gap between the "haves" and the "have nots," and it's getting wider, especially with the advent of big compute. The set of people with access to this enormous amount of compute is shrinking. What do you see happening with this trend? Startups want to get involved here, but do they really have access, and how does that affect them?

Sam:

I think, yeah, it is a huge problem. There will be a handful of companies, like OpenAI, that can train the largest models, but once they're trained they're not nearly as expensive to run. So I think what's gonna have to happen is that a few people train the large models, and we figure out how to share them with the people who can't train them. I think that's how you resolve the "haves" and "have nots" issue.

Shawn:

I have a favorite author who talks about history standing at a hinge point, implying that major milestones and events in history make a huge difference. You don't realize it at the time, but you see it when you stand back and look. A lot of people like to ask people like you to predict the future, but if you were to look back 20 years from now, is there anything you think we could characterize as a hinge point in that way?

Sam:

I think there are very rare, occasional decisions, where a small group of people, tired at three in the morning in a conference room, make a 55-45 call, and it has a massive influence on the outcome of history. But those are extremely rare. Most of the time, it's squiggling around a curve that's gonna go up and to the right as the technological future unfolds; you get a little bit wrong, you get a little bit right, but eventually progress continues. In terms of those few incredibly consequential decisions, everyone has a story they love. The one I love is the Russian military officer who decided not to push the button to launch a nuclear strike when he thought they were likely under attack, and it turned out to be an instrument error. That one decision by one individual, when all of his training and the policies said he should have pushed the button, is a case where the world could really have gone the other way. And to be perfectly honest, in terms of one hinge decision, I think a decision like that is bigger than any single decision tech companies usually make.

Shawn:

One of the most intriguing things about OpenAI is, I think, the mission you posted online, and specifically that you talk about making AGI safe and beneficial for humanity. I think a lot of these hinge points often tie to someone's moral compass and what guides them. There's a lot in the news about senators, one in particular, who vote guided by moral principle, by their religion, in a world where there are a lot of bad actors or even just plain amoral actors. How do you help people stand at that place that could, potentially, be a hinge point?

Sam:

I mean, I think we have a set of principles. We try to write up in our charter what those are, and we'd like the public to hold us accountable to them. I think people can disagree with the charter, though. As the stakes get higher and higher, no one organization, and certainly no one person, should be making decisions about what society, what the new social contract, looks like: how this technology gets used, how we share governance and economics. I think a thing we will move to in the coming years or decade is that more and more of our decisions will be influenced by an advisory board that we'll need to put in place, of people who can represent different groups in the world, which right now we don't have. As we get closer to something that feels like AGI, none of us deserves the right, nor wants the responsibility, to single-handedly determine how we're going to live.

Shawn:

A year ago, you mentioned that some of the world's hardest problems could best be solved by artificial intelligence, and I think you brought up climate change as an example. Can you talk a little bit about the class of hard problems you think would be best solved by going after AI first?

Sam:

Well, I think there are a lot, but I would like to talk about the class of problems that I think aren't, because one thing the technology industry gets wrong, and I myself am often guilty of this, is believing that technology solves all problems. First of all, I think technology creates a lot of problems too. I believe it's net good for the world, but there's a balance sheet. Second, there's a huge set of problems around public policy, people's optimism for the future, and the betrayal they feel by the country, by the American dream that somewhat got taken away, where most people in Silicon Valley say, "Well, we just need better technology, we just need AI, and we'll solve that." AI will help, and better technology will help, but these are policy and governance and leadership issues, and it's a mistake for the industry to say we know better than everybody else how to solve them.

Shawn:

Do you have some examples of problems you think we should focus on?

Sam:

One very basic one, I would say, is that the current generation of young people is the first generation in American history, if you believe the polling, which, who knows, to not think their lives are going to be better than their parents'. So this has worked for 240 years, and now it doesn't, and I think asking why it doesn't is a great place to start.

Shawn:

How do you see artificial intelligence helping with some of these problems? For example, and this is back to the theme of predicting the future, which is unfortunate, there's a Recode interview you gave where you said humanity will, at some point, build artificial intelligence that surpasses human intelligence. How do you feel that affects some of these underlying policy issues?

Sam:

Well, what I think I said, or what I meant to say, is that we are guaranteed to eventually do that if we don't destroy ourselves first, which is possible. You know, I think the world is an unstable place. But given enough time, biological intelligence should always end up creating digital intelligence, which is likely to be superior in many ways. Whether or not we ever create digital consciousness is, I think, up for debate, but digital intelligence, given enough time, is for sure. And I think that just makes everything super different. Humans are really good at a lot of things, but computers and AI turn out to be really good at a lot of things as well. My most optimistic hope for the future is that humans and AI, some sort of hybrid, merged human and AI together, are far more capable than either on their own.

Shawn:

Let's talk a little bit about barriers to adoption. There are those online who have asked questions in advance about whether an AI winter is coming, or what barriers are emerging. Can you talk a little bit about the barriers to progress, including the ones that are not just technical in nature?

Sam:

Yeah. Well, I think the technical barriers are still huge and it’s a mistake to say they’re not.

We will squiggle around the exponential curve of progress, up and down, and there will be down moments that could last months or years, where people say the AI winter is here. You know, there's a classic kind of person who loves to call the top, in markets, in research progress, whatever. It's like a friend of mine who I used to think was this very brilliant predictor of recessions, until I realized he had forecast eighteen of the last two recessions. There will be a lot of people who say AI progress is finally over, and there will be periods that are difficult, periods where we're walking in the wilderness. At some point those people will be right, but they've been desperate to say it's gonna stop working for the last eight years, and it's just been this relentless upward elevator of progress.

It is possible, in fact I would say it's probable, that we're missing some big ideas to go all the way. I have my own thoughts about what they are, but honestly, they're speculation, and predicting the future is hard. What I can say with certainty is that the things we know work are going to go a lot further. That's an exponential curve. And the flood of talented people: if you ask any really smart 18-year-old studying computer science in college what they want to work on, they're very likely to say AI. The flood of talent into the field is another exponential, so that's two together. Algorithmic gains keep coming pretty well, too. So I think there are things that will work better than we thought and worse than we thought, and we will hit some dark periods and stumbling blocks.

But the biggest miracle of all is that we got an algorithm that can learn, full stop. Truly, legitimately, we have an algorithm that can learn, and it seems to keep scaling with more compute. In my whole career, the central lesson has been to scale things up more than you think. When people see a curve that's been going up and are asked to predict the next 10 years of progress, my default assumption is that the curve keeps going, for a while at least. Most people's default assumption seems to be that it will keep going on that same exponential for a few more months and then perfectly flatline, which is a weird framework and not, I think, what's going to happen.

Shawn:

Talk a little bit more about meta-learning, about the ability to do deep learning with machines that learn how to learn. You've mentioned that this was an exciting recent development; could you share a little more about that?

Sam:

I would say, more generally, that generalized learning of many forms is exciting. You know, algorithms that can set their own problems and go off and explore; the ability to learn a lot about one task and apply it to another task; the ability to pre-train these big models and then use them, with their knowledge of the world, to solve other problems. I think human intelligence is very close to this concept: you can take existing information and thoughts and apply them quickly to new problems. It's actually remarkable how quickly humans can learn. It takes a long time to train up, maybe 20 years to get pretty smart, but then you can learn a new thing very fast, and apply knowledge you were told once or a few times to solve a new problem in like three seconds. The fact that we're beginning to see that happen with AI is, I think, quite remarkable.
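
The pre-train-then-adapt pattern Sam describes is easy to sketch. Below is a minimal, hedged illustration using PyTorch and torchvision (both assumed here; this is not OpenAI's code): a backbone pretrained on one task is frozen, and only a small new head is trained for a hypothetical new 10-class task, so very little new data is needed.

```python
import torch
import torchvision

# A backbone pretrained on ImageNet stands in for "knowledge of the world".
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
for param in model.parameters():
    param.requires_grad = False  # freeze the pretrained backbone

# Replace the final layer for a hypothetical new 10-class problem.
model.fc = torch.nn.Linear(model.fc.in_features, 10)

# Only the new head is trained, so far less new data is required.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```

Reusing the expensive general model and cheaply adapting it is also what makes sharing large pretrained models practical, as discussed earlier.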

Shawn:

I was most impressed, just looking at the amount of investment required for the Dota 2 experiment, to see how much compute power went into that problem. Do you see that accelerating, or do you see that there will only be a few companies with the power to invest at that scale?

Sam:

You know, I just had this wistful nostalgia for when our compute bills were that cheap. It’s just gonna keep going.

Shawn:

So, I guess that opens the room for larger geopolitical players, governments, to invest in this way.

Sam:

Mm, maybe, but you know, the ability of technology to lower costs, so that fewer and fewer people can have more and more influence, is remarkable. If you think about the big iron engineering projects of the past, the Manhattan Project, the Apollo program, things like that, those had to be done by nation states; they cost so much money. I think we can actually do this without that. We do need a lot of money, but not government scale.

Shawn:

There was a statement you made about avoiding the wrong thing. In a kind of famous 2016 New Yorker article, you said "some things we will never do with the Department of Defense." I wonder, could you discuss what you meant by that? I mean, there are some nation-state actors people worry about doing the wrong thing, or…

Sam:

Okay, well, first of all, I'd like to say there are a lot of things we would do with the Department of Defense. I think the current mood in some parts of Silicon Valley, which is like, "we hate the US, and we hate the US military even more," is just an awful stance. There are plenty of cases where, if asked to help our country, we'd be proud to do so. There are some things I would say we wouldn't do, and, you know, we have general thoughts, but the instances in the gray area in between are where we have to make case-by-case decisions. In general, though, I think we, United States citizens, Western society, whatever you want to call it, are better off if the United States government remains a powerful force in the world than if it doesn't, and we are happy to help.

Shawn:

In a very different setting, you once said that growth masks all problems. You were talking about investment and startups and growth in general. Is there a version of that that applies to artificial intelligence?

Sam:

Sure. So this is the good and the bad of scale. Scaling things up works really well, but it also papers over other problems. In companies, that's obvious: you can cover over deep-seated cultural problems because everyone's excited by the growth. And in AI, it may be that because scale keeps working, we're not doing as much research on more efficient algorithms as we should.

Shawn:

Let's return to the thought of being safe and beneficial, and think about early warning systems. I've never been skydiving, but there's this concept of ground rush, where you're approaching the ground and you realize it's getting really close. In some of your talks, you've said that the time frame for AI adoption is not very important, that in the eternal scheme of things thirty years versus ten years is a blink of the eye.

Sam:

I think that is the dumbest debate in Silicon Valley.  There’s a lot of competition for that title and I still think it wins.

Shawn:

Do you see any early warning systems like that for us, so we can tell we're about to go splat, like we're starting to see ground rush with these changes?

Sam:

I mean, there are a lot. I think when the system can start doing things like saying, you know, "you asked question X, but it seemed like you really meant Y, is that accurate?" and being right most of the time, that, to me, will feel like a moment to start taking things really seriously.

Shawn:

Do you see any early indicators right now? You've mentioned a little about what OpenAI is doing; are there other companies doing things you think are major inflection points in growth?

Sam:

I mean, I think there have been many great results throughout the field over the last 12 months, but there's not one that's like, "all right, this is the definitive thing."

Shawn:

Let's talk a little bit about the recent investments in OpenAI; there's been a lot of news around them. Microsoft's recent large investment was one of them. Some people wonder how that's going to be used. Buying more compute power? What kinds of themes do you see for that investment?

Sam:

Compute and people.

Shawn:

There's a talk you gave, I think it was that same Google address, where you said the most important invention you saw was the joint-stock corporation.

Sam:

Yeah. Someone asked me about the most important invention of the industrial revolution, and what I said there is, you know, studying that as a kid, I would have had a very different answer: I would have tried to think really hard and pick the one specific invention that's had the most impact on the world since then. Studying it as an adult, and maybe particularly given the career I chose, I think there was one thing that enabled so much, which was the British government deciding to grant a sort of second-order sovereignty to companies. You could have this legal entity where a bunch of people could be aligned, with capital and people working for it; you got this new legal structure. And that is such a powerful idea.

Before that, you were basically limited to small groups of people who could trust each other, and then all of a sudden you had this entity that could glue a bunch of people and capital together, with an incentive structure where everyone wants the share price to go up, where everyone's incentives are in line. I really do believe that incentives are a superpower, and if you can get incentives right, or make incentives better, that's the thing you should work on. So the British government invented this one single thing that enabled an incredible boom, not just in coming up with inventions, but in making them great and getting them into people's hands, and it is so underappreciated how valuable that was.

Shawn:

That's a little bit about ecosystems and helping people align their network effects. Do you see the same thing happening today in the artificial intelligence world? Do you wish people would align together to drive those same kinds of economies of scale?

Sam:

I'm pretty happy with what we're doing there. I'm pretty happy with what we've been able to align. I think more of that would be good, but it's off to a good start.

Shawn:

Can we talk a little bit, for those of us here, about what's going on right now at OpenAI and what you see happening?

Sam:

Honestly, there's no way I can make this sound exciting. We show up every day, we bang on our computers, we try to get algorithms to work, and then we find out it was some stupid bug and we all get upset with each other. No, it's a little bit better than that. We are trying our hardest to discover what makes intelligence work, and we're trying to think not about how we get our applications a little bit better next year, but about what, over the long arc of history, it takes to make machines that truly think. We do all sorts of things along the way, but that's it.

Shawn:

You've mentioned a few times that even though you're an optimistic voice, there are a few areas where you worry about risk with the advent of AI, security being one of them. I think you called it an eye-watering risk. Would you talk a little bit about some of the risks we hope to mitigate, the ones you think we really have to address in the next couple of years?

Sam:

AI specifically or sort of other…

Shawn:

Yeah, AI specifically.

Sam:

I do think there's one application of AI, not in the next couple of years but on a longer timeframe, where the threat that advanced AI will pose to cybersecurity is likely to be huge. Even without that, cybersecurity is difficult, so I think that's a great problem to focus on.

Shawn:

I've heard quantum computing mentioned in the same breath, as something that would disrupt things in a major way. What do you see happening in the next couple of years? You've had a broader view of investment capital than most people.

Sam:

Yeah, I mean, people love to talk about quantum computing breaking encryption, and I think that is not a thing to be super concerned about. The number of logical qubits it takes to do that is far enough off that we'll know when they're getting close, and I also think we have plenty of time to transition to quantum-resistant encryption.

Shawn:

I wonder if we could zoom in: there are a lot of startup founders here in the room with us today, and I wonder if we could zoom in a little on this idea of memetic innovation. You've used this phrase in some of your previous talks, that you worry that…

Sam:

You are extremely well researched, by the way. I'm very impressed.

Shawn:

…you worry a little bit about copycat innovation, that people just copy the idea before them, and that's not true innovation. In the artificial intelligence world, do you worry about that, or is it truly groundbreaking enough that it's not an issue?

Sam:

I mean, I think it's basically deeply wired into human nature. In any setting, some number in the high 90s percent of people are gonna be super memetic, and in my experience that's difficult to stop. But it's okay, because the other two percent of people drive the world forward. Most people will be very incremental and memetic, and a few people will be truly original thinkers, and that's all it takes, lucky for us.

Shawn:

We had a VC panel here just before you came on, where we talked about what they're investing in and about the buzzword of artificial intelligence, where you just anoint some simple algorithm you created as artificial intelligence. For someone to truly say "I'm doing AI" in a startup, what kind of bar do they have to hit for that to be true?

Sam:

You know, it's true. They don't invite me on VC panels anymore because I can't keep my facial expressions hidden. But every few years there's some buzzword: we're gonna do this with social, we're gonna do this with podcasts, we're gonna do this with crypto, we're gonna do this with AI. And basically, I think by the time you hit, say, three buzzwords in the first two sentences of a startup pitch, you can pretty safely ignore it. Even at one you should be a little bit skeptical, unless they're clearly doing it. The number of startups that say "we're an AI-driven X" and are actually AI-driven is, I don't know, 1 in 20 or 1 in 50, something like that. And the lesson here is: startups pitch themselves however they think will work, and VCs often fall for it. The good VCs dig in and don't fall for it.

Shawn:

I wonder if we could talk a little bit about GPT-2. You unveiled it, but didn't completely put it out in the open, into the public domain.

Sam:

We did, actually. We just did a sort of staged release.

Shawn:

It's a topic that all of us care about right now, with recent politics and so on. Could you talk a little bit about the reasoning behind how you unveiled it, and whether that pre-signals anything else?

Sam:

Um, I do think you should expect us to do more staged releases like that. When we develop a technology that we think is probably safe to release, but that we think will eventually become unsafe as it scales up, we'd like the world to get a heads-up. The world got used to Photoshopped images over a period of time, and now people know not to trust them, but people do still trust text, press releases, news, whatever, for the most part. And, you know, a new thing that I think will happen is that we're not far away from entirely fake videos of world leaders saying whatever you want, and people tend to trust those too. The world needs time to adapt to any new reality.

Part of our goal with how we handled that release was to say: there is a change coming, we're going to get through it, but you need to think about this as a possibility, you being regular people reading the internet, policymakers, whoever. My guess is that someday, far in the future, when world leaders give an address, they'll cryptographically sign it, and we'll just get used to that. You know, since all videos can be faked, they'll only post from their own account, or whatever, but there will somehow be verification, and the world needs time to adapt to that.
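
To make the signing idea concrete, here is a minimal sketch of how a signed address could work, using the Ed25519 signature scheme from the Python cryptography package. The keys, the video bytes, and the workflow are illustrative assumptions, not a description of any real system:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical: a press office holds the private key and publishes the
# public key, so anyone can check that a video was not altered.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

video_bytes = b"raw bytes of the recorded address"
signature = private_key.sign(video_bytes)

try:
    public_key.verify(signature, video_bytes)  # raises if tampered with
    print("authentic")
except InvalidSignature:
    print("forged or modified")
```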

Shawn:

You mentioned in a previous address the balance between politicians thinking about short-term economic impact on people's lives versus the world as it will emerge in 20 to 30 years, and trying to balance these two very different concerns. Do you think our world leaders think enough about artificial intelligence, and what should be on their minds as they think about their constituents?

Sam:

I mean, clearly not, but the list of things I would like our world leaders to do differently before they become AI experts is quite long.

Shawn:

Anything in particular you want to raise?

Sam:

Not in front of this many people. I always get in trouble for that.

Shawn:

That’s great. Well, we have some time for a few Q&As from the audience. Is there anything you want to say to the audience before…

Sam:

No, that was really fun. Okay, awesome. Thank you so much.

Shawn:

We're gonna open it up to you in the audience to ask a few questions for Sam if you want to. Just raise your hand if you have a question you want to ask.

Sam:

I’m also happy to talk about AI more relevant to business today if people are interested.

Question from the Audience:

So, I wear a bunch of different hats, and food tech is one of them. Y Combinator seems increasingly interested in the food tech space, as evidenced by recent investments. We advise and invest in this space, [inaudible] Meats being one of them. These companies in cellular agriculture and such are trying to solve the scale-up problem, which is very multifaceted: a combination of microfluidic experiments, compute, supply chain, and so on. In the context of this fireside chat, where do you see something like OpenAI fitting in where there's maybe limited data available, but companies are trying to incorporate these techniques into how they solve their problems?

Sam:

I have not followed food tech closely. I really want it to happen, but I've been a vegetarian my whole life. I've tried to eat fake meat, and I've decided I just really don't like the taste of meat, and I have not personally been super involved in the space.

I don’t have a sense of what the biggest problems are. I have not spent a lot of time with the companies but I’m really happy it’s happening. I just don’t have any expert opinion to offer about how to apply AI either. Sorry.

Question from the Audience:

Thank you, my name is Addison Snell. I'm with Intersect360 Research. I liked that you were talking about concepts like when AI becomes more intelligent than humans, and that you separated that from consciousness. Do you have a definition of what constitutes intelligence in that context?

Sam:

Something about the ability to learn new concepts based on existing knowledge, and maybe something about the ability to learn them fairly quickly. We could debate the right metrics here, but I think intelligence is deeply related to the ability to learn, which is why I think we're going to get there: because we have algorithms that can learn.

Shawn:

Thank you. Next question.

Question from the Audience:

Yes, hi. I was wondering, in terms of AI and international collaboration, what do you think Silicon Valley should be paying more attention to in terms of AI developments outside the US, if any? And how do you think we should be working together more globally, whether on the technical or the governance side of AI?

Sam:

So, right now I think it's pretty collaborative; researchers from around the world publish their work and work together quite openly. I'm nervous that's on a path to get much harder. I certainly think the optimal long-term outcome for the world is close international collaboration, not an arms race between nations, and I hope that will happen, but I'd say it's well outside the area where I feel like an expert who can make a confident prediction. I think it's clearly in the best long-term interest of the world, and one of our goals at OpenAI is to push policy in that direction. I'm somewhat heartened by how it's gone so far. I think one of the really great values of academia, for all of its flaws and faults, is that it has done better than any other segment at long-term, open, international collaboration around ideas.

Question from the Audience:

Hi, I recently joined this industry, and I've seen that there's a big diversity issue.

Sam:

Huge.

Audience member:

Right? So I wanted to know your opinion about how to solve this issue, and how you think it's affecting AI in particular.

Sam:

One of the things we've done is start what we call OpenAI Scholars, which is a way to take really talented people from all sorts of backgrounds, give them exposure, mentor them, and teach them. That helps at the rate of a handful of people per year, which is about our own capacity, but it's clearly not enough to solve the problem. I do believe that the people who build these things, not through any intentional fault but just from the way it works, put a huge amount of themselves and their own views of the world into these systems, and so you've got to have more diverse input. And yet if you look at the rates of graduating PhDs in AI, it's incredibly striking how non-diverse they are.

So I think what has to happen is that we need a lot more programs like OpenAI Scholars, and people in the field need to commit some of their time to mentoring people from diverse backgrounds. We also need to figure out how to get new people into the field, rather than waiting for the pipeline of AI PhDs to catch up, which is a many, many year process. If we don't do that, then no matter what OpenAI or others do to get really good representation and advice, the people who build the systems will always have a huge amount of influence over what actually happens, again not through any negative intentions, and we'll end up in a sub-optimal world.

Sam:

All right, last question.

Question from the Audience:

Hi there. Quick question, somewhat related to diversity, but actually about data and the approaches of different countries.

My question relates to how you look at, for example, China, which right now is doing a lot of surveillance. In terms of visual data for AI, at least, they have a lot more than the US currently seems to have, or at least that's what the press is putting out there. What are your thoughts on the different approaches in the long run, and the future of that…

Sam:

I used to really, really worry about this. As we talked about earlier, I've shifted my own thinking to expect it's going to be more about a compute edge than a data edge, and I certainly hope that's true, because otherwise a government like China's has a huge edge in terms of more data than anybody else. I don't think the society we want is a super-high-surveillance state, at least not the one I personally want, and the trade-off of that is just less data. However, the internet is giant, and we forget just how giant it is. Even in a world where we do need a lot of data, I think we can get it, and the edge is mitigated.

Shawn:

So I guess I have one final question to wrap up the theme of the day. You've given us some great perspectives looking back at the last couple of years. If you were to look at the near-term horizon, the next one or two years, what developments do you most hope for, so that when we stand on this stage a year or two from now we can say, "that was really a remarkable moment"?

Sam:

I think unsupervised learning: the ability to look at huge amounts of data and understand the underlying concepts. That's just gonna really surprise us on a two-year time frame, and it's gonna do amazing things.

Shawn:

Awesome. Well, thank you very much.

Sam:

Thank you. Appreciate it.

Author

  • Sam Altman

Sam Altman is an American entrepreneur, investor, and programmer. He was a co-founder of Loopt and is the current CEO of OpenAI. He was the president of Y Combinator and was briefly the CEO of Reddit.
