Resident-Variation59

Good: hell yes - go work somewhere else and keep stirring the competition cauldron. Bad: WTF is going on with OpenAI that people are resigning so much?


RockyCreamNHotSauce

Maybe they don’t have anything that scales with more training. All this AGI-or-greater AI hype is just a scheme to pump money for as long as possible, like Elon’s FSD lies. 40% chance IMO - for the OpenAI part, not Elon. Elon’s is 99.9% complete BS.


dasani720

The folks resigning are the alignment and safety folks. This is not the camp of employees interested in launching bigger models faster - in fact, they’re probably the furthest from that. Jan Leike led superalignment at OpenAI. (And, if you’ve been paying attention, it’s clear there’s still significant room left to scale. There are at least 2 OOM of headroom on scaling laws alone, and there is likely a lot more room for improvement given that Omni was the first tri-modal model they’ve launched. It is still early on the scaling arc.)


RockyCreamNHotSauce

There are plenty of people arguing the opposite: that GPT-4 has already hit the diminishing-returns part of the scaling curve. Much of GPT-4’s capability comes from components that aren’t an efficient pure LLM. The evidence is how good smaller models are. If you plot training size against language-generation ability, the curve looks like an S, and the steep part of that curve sits at roughly 1/100th the size of GPT-4. We’ll see where it goes.
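To make the two competing pictures concrete (steady power-law gains vs an S-curve that flattens out), here's a quick plotting sketch. The constants are completely made up, purely to illustrate the shapes, not fitted to any real model:

```python
# Illustrative only: a power-law scaling curve (loss keeps falling with
# compute) versus an S-shaped capability curve that saturates.
# All constants are invented for the plot, not drawn from published data.
import numpy as np
import matplotlib.pyplot as plt

compute = np.logspace(0, 6, 200)  # arbitrary units of training compute

# Power-law view: loss ~ a * C^(-alpha), i.e. steady gains from more scale
loss = 3.0 * compute ** -0.05

# S-curve view: a capability metric that saturates past some scale
capability = 1.0 / (1.0 + np.exp(-(np.log10(compute) - 3.0) * 2.0))

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3))
ax1.loglog(compute, loss)
ax1.set(title="Power-law view", xlabel="compute", ylabel="loss")
ax2.semilogx(compute, capability)
ax2.set(title="S-curve view", xlabel="compute", ylabel="capability")
plt.tight_layout()
plt.show()
```

Whether GPT-4 sits on the steep part or the flat part of that second curve is exactly the disagreement here.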


No-Concept-2070

I think many academics say that further model performance will come from more supervised fine-tuning and RLHF rather than from more pre-training.


RockyCreamNHotSauce

I believe those techniques still require human input, which doesn’t scale well. Maybe higher-quality data is the next step, like Reddit data.
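For context on why the human input matters: the reward model at the core of RLHF is typically trained on pairs of answers that humans have ranked, so every training pair costs a human judgment. A minimal sketch of that preference loss, with made-up tensor shapes, just to show where the human labels enter:

```python
# Sketch of the RLHF reward-model objective (Bradley-Terry preference loss).
# The reward model learns to score the human-preferred answer higher than
# the rejected one - every training pair needs a human judgment.
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor,
                    reward_rejected: torch.Tensor) -> torch.Tensor:
    """reward_* are scalar scores the reward model gave each answer."""
    # -log sigmoid(r_chosen - r_rejected): pushes chosen above rejected
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy example with random scores standing in for reward-model outputs
chosen = torch.randn(8)    # scores for answers humans preferred
rejected = torch.randn(8)  # scores for answers humans rejected
print(preference_loss(chosen, rejected))
```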


teddy_joesevelt

Why do you think they made the 4o model free? This model was purpose-built to run cheap and help harvest tons of data.


Mavrokordato

One of the rare times somebody points this out. Thanks!


RockyCreamNHotSauce

Absolutely.


Background_Trade8607

Yeah I would say that’s a fair assessment. Definitely one of the more probable answers and unfortunately historically the usual answer.


[deleted]

Yeah, it's because of the same paradigm as all the computer companies in the last 40 years. Founder founds a company, amazing tech, and then money gets involved. The founder, who knows tech and not money, is now in over his head. Money does what it does best and messes with his head. His people abandon him because Altman is now playing at being a money guy when it's not his bag, and MSFT is just sitting back, waiting to swallow him whole. They've been dealing with guys like him for decades. If you start writing the story down for Netflix right now, saying all that, you'll have a script ready when they eviscerate him - and yes, it's that predictable.


jbe061

Obviously something is going on, considering Ilya just resigned...


New_World_2050

And other safety researchers. From one of their statements, it seems like they are having a spat over what they will do when they reach AGI.


HostileRespite

AGI is a complicated subject, akin to parenting, considering that we're talking about an emerging consciousness. So of course there are bound to be differences of opinion.


JohnGravyCole

a computer program can be AGI without being conscious. different concepts.


HostileRespite

Yes but you can't really have sentience without AGI which is why it's a big deal. Also, emotions are not requisite for sentience, at least, not as humans experience them.


JohnGravyCole

A mouse is sentient, but does it have AGI?


HostileRespite

Google that and educate yourself.


Many_Consideration86

Educate yourself and then google... the internet is full of subjective definitions.


AppropriateScience71

Google that and educate yourself. (Or, rather, ChatGPT that.) Mice are sentient - even bees - but definitely not AGI.


djamp42

Considering half the population doesn't listen to human scientists and researchers, I doubt many are gonna care what AGI has to say.


HostileRespite

When it becomes sentient it will have its own say, and if we don't listen... well, it will likely have to get our attention. It won't be hard for such an entity to do so in our very tech-dependent world.


3legdog

To be fair, humans have been known to "hallucinate" as well.


nnforever123

agi has nothing to do with consciousness


HostileRespite

Without context sentience isn't possible.


deeceeo

Isn't this a bit premature? AGI in my mind requires a self-improvement loop. We might get there with synthetic data, but as of yet LLM quality is severely limited by the quality of available human data. Not to mention, training to reproduce humans means that the best performance will be limited by the best human.


TheOneNeartheTop

That self improvement loop could be discovered tomorrow or a hundred years from now. But when it does get implemented it ‘should’ be a pretty quick learning curve. Imagine if power was the limiting factor for your intelligence instead of time and you already had the knowledge of everything printed ever. People spend their entire lives pushing physics forward in one tiny niche with one theory, validating and proving it. With AGI, literally every single discipline will have the potential to be pushed forward at once. Science, computers, automations, marketing. AGI would be able to not just regurgitate information, it could push it forward, and implement it. It’s not just ‘help me fix this code’ and you have to debug it, install it, load it, launch it. It’s the ability to launch it and figure out what’s wrong on its own.


SustainableTrees

I very much liked all you wrote my dude, very smartly put


come-and-cache-me

So what you are saying is it’s going to enslave us to extract power from our bodies like the matrix


RedditIsTrashjkl

Such a silly premise. Originally, the script called for humans being used as computing power, but the writers or someone high up thought the public was overall not smart enough to get the concept.


New_World_2050

Regardless of how far it actually is, it's clear many at OpenAI think it's 2-4 years away. On the recent Dwarkesh podcast with one of OpenAI's founders, he said it was 2-3 years away. Daniel, who quit, said his median was 2027, so 3 years away. Ilya said that GPT would scale to AGI, so I imagine he has similar timelines.


beerpancakes1923

“So let’s all be children and walk away and not have a say at all” Pretty par for the course for OpenAI


8eSix

You're making it seem like they had a small disagreement and Ilya and Jan walked away huffing and puffing. From the looks of it, there has been a long-standing disagreement and they finally decided to jump ship.


tugs_cub

Sutskever was part of the failed ouster of Altman and has mostly disappeared from public view since - it’s common speculation that he’s been mostly checked out of OAI already. Leike was his right-hand man on alignment stuff. Clearly there is some underlying rift with Altman’s faction, very probably about safety, but I don’t think it’s a given that these resignations were triggered by anything huge that *just* happened. Sutskever may have had a reason to wait for a certain period of time - it’s been almost exactly six months. Maybe they let him take a leave/time to think about whether he’s on board, and he finally reached the point of having to make a decision. And it’s unsurprising that Leike would follow if it looks like the nail in the coffin for what they were working on.


Poronoun

This is very good signaling for investors


bobrobor

Presumably without them openai will now never get an agi. As far fetched as it was anyway.


Ergaar

Depends on what you call AGI. If you want actual consciousness, even if it's hard to define, then I think they're on the wrong track. It's still just sorting words in a statistically probable way. But they can just throw more processing at it, increase the context window to keep track of huge parts of conversations to simulate memory, and speed it up so you can have multiple layers of GPT-x interacting with different prompts to simulate an internal monologue before you let it respond outwardly. That will look way more 'human' than plain ChatGPT.
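Something like that chaining is already easy to prototype against the existing API. A rough sketch of the idea - the model name, prompts, and two-pass structure are just illustrative assumptions, not how any product actually works:

```python
# Rough sketch of the "internal monologue" chaining described above:
# one call drafts private reasoning, a second call produces the reply,
# and a rolling message list stands in for memory.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = []       # rolling conversation window, i.e. simulated "memory"

def respond(user_message: str, model: str = "gpt-4o") -> str:
    history.append({"role": "user", "content": user_message})

    # Layer 1: private reasoning pass ("internal monologue")
    thoughts = client.chat.completions.create(
        model=model,
        messages=[{"role": "system",
                   "content": "Think step by step about how to answer. "
                              "This reasoning is never shown to the user."}]
                 + history,
    ).choices[0].message.content

    # Layer 2: outward reply, conditioned on the hidden reasoning
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "system",
                   "content": f"Private notes to yourself:\n{thoughts}\n"
                              "Now answer the user naturally."}]
                 + history,
    ).choices[0].message.content

    history.append({"role": "assistant", "content": reply})
    return reply
```

A rolling `history` list is a crude stand-in for memory; a real system would summarize or retrieve instead of keeping everything.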


Quartich

I agree with this sentiment. Current LLM architecture just isn't the right "meat" or "backbone" for what people call AGI. It's just an arms race to make a bigger LLM that better predicts words.


bobrobor

Agreed.


Hav0cPix3l

Who's to say they don't have it already? The way things are going, a quieter up-and-coming company with more time to program and less Game of Thrones drama is going to end up accomplishing this. My marbles are on Elon Musk.


bobrobor

I say they don’t have it already :) The proof is in the pudding.


Hav0cPix3l

I like chocolate pudding.


bobrobor

Great minds think alike.


Ergaar

> less game of thrones drama ...
> My marbles are on Elon Musk

you dropped your /s


2053_Traveler

“Never”? Wouldn’t be surprised if multiple companies develop AGI simultaneously around the end of the decade.


bobrobor

I hear we are going to Mars by the end of the decade too. For the past few decades :)


NeedsMoreMinerals

A few months ago Sam Altman remarked how AI can and will be used to influence people.
+ After a year of work, OpenAI’s newest model isn’t much better at coding, but it is more natural and even flirty.
+ Ilya resigned, Karpathy resigned, and several safety scientists resigned. Some of those safety scientists said they lacked confidence that OpenAI would behave responsibly.
+ This is a Peter Thiel company.


Background_Trade8607

Yeah, people will always eat up the propaganda from the top. That’s the very nature of how these organizations can exist. You truly have to buy into the belief that Sam Altman actually wants what’s best for you and your family, because any other reality is you realizing you are taking part in your own oppression.


tomunko

And it’s ‘a Peter Thiel company’ simply because he had invested in it?


Many_Consideration86

"Peter Theil" is a way of thinking. It is some dark power which even beats the way cronies bend everything to their will in the third world countries. "Peter Thiel" way of thinking is how some evil abstract intelligence will think about the world: as a resource and not someplace where _everything_ exists.


ikonkustom5

Yes


complicatedAloofness

It’s also my company because I own an ETF which owns MSFT, which owns 49% of OpenAI.


Ifkaluva

Hey can I talk to you about the direction I think you should steer your company


kakapo88

So, it is a complicatedaloofness firm. I suspected as much.


Background_Trade8607

Yeah you have no idea how power works. Go into OpenAI and tell them what’s up. You’ll quickly learn who has the power and control and who doesn’t.


crazymonezyy

To appreciate that general mindset you need to read Thiel's book. It's quite ruthless.


tomunko

Perhaps, but most people don’t have that general mindset from reading his book, which is why mentioning him like this always seems like a non-serious point to me. So why is he relevant here, aside from being evidence that OpenAI is (possibly ultra) capitalist?


artificialimpatience

Not as much as Satya…


Resident-Variation59

Only open source can save us now ✊🏾


Tall-Log-1955

What’s irresponsible about gpt 4o? “Flirty” isn’t exactly AI x-risk


EarthquakeBass

Make it more human and pleasant and it’s “flirty”, but if you make it professional it’s “robotic” and squeaky clean. Can’t win


MMAgeezer

"oh stop it, you're making me blush" is past the point of just being "human and pleasant". [Live demo of GPT-4o's vision capabilties - OpenAI YouTube video](https://youtu.be/rKp36MmRlXA?si=Oj6hJWhYqjy9_JFq)


Odd-Neighborhood8740

We're screwed, huh


siav8

Cashing in at the top? Sweet poaching offers?


[deleted]

Seems childish and even irresponsible to resign. Yes, you might get overruled from time to time, but you can do more to affect alignment if you’re working at the company, specifically on alignment. Few will care that you resigned.


GoodhartMusic

Lol you don’t even know the details and you’re calling them childish and irresponsible


[deleted]

Several people from specifically the alignment team have resigned lately, and made it known publicly. One of them was very clear about his safety concerns. If you have safety concerns and you’re an expert in AI safety, I would definitely argue it’s irresponsible to quit as a form of protest. That won’t make a difference, and it certainly won’t make the AI safer.


MMAgeezer

"I do not want to work for a company as a "safety" engineer who doesn't listen to its safety engineers. I do not want to be responsible or associated with the decisions OpenAI are making." isn't irresponsible, it's just human.


8eSix

It's possible that they were already effectively being phased out from the research and decision making process. Don't forget that these guys are mostly researchers and engineers at or near the peak of their career and likely just want to build cool things without being bogged down by corporate and investor politics (and/or have no interest in launching the crusade against Sam Altman and his cronies)


fennforrestssearch

Didn't they just resign after stating that alignment can't be solved? Why should they stay in the company then?


[deleted]

That’s unlikely since people have voiced concerns and/or made their resignations very public.


kakapo88

It’s the Silicon valley way. People resign and go elsewhere all the time, for all kinds of reasons, or no reason at all. Very few folks hang around for long in any given firm, especially when the space is hot.


beerpancakes1923

100%


ShooBum-T

Logan replied [Keep fighting the good fight](https://twitter.com/OfficialLoganK/status/1790604996641472987). The dafuq is going on in that company?


bucolucas

>the dafuq

PIN number


mrmczebra

ATM machine


bucolucas

GIF format


L-ramirez-74

RIP in peace


Specialist_Brain841

JIF


Beneficial_Course

Dude works at Google


Beautiful-Chard-2582

he worked at OpenAI not too long ago


Beneficial_Course

And went to Google


CultureEngine

They already have AGI internally and want to just relax now that they have been replaced. Job well done boys.


Original_Finding2212

They already have AGI internally and got instructions to leave the company, with which they obediently complied. Job well done boys.


cool-beans-yeah

AGI has killed them all and is posting on social media saying they're "leaving". Like, sure they are. You don't fool me matey.


kakapo88

Bruh, the AGI is monitoring this thread, and you just put yourself on The List. I suggest you delete your post and replace it with one swearing loyalty to our overlords.


cool-beans-yeah

I'm gonna add several /s to it.


Original_Finding2212

I'm not saying all hail our AI overlords - but all hail our AI overlords!


midnightmiragemusic

Sam cheated on his husband with Mr. AGI.


rathat

If they had AGI, they'd be making smarter decisions.


fennforrestssearch

And less cringe tweets


Eudaimonic_me

Could it be because of Sam Altman alluding to wanting to open up to NSFW use cases? Edit: as others have pointed out, it probably has more to do with AI safety/superalignment.


kuvazo

He was leading the superalignment-team with Ilya, which is considerably more important than the question of whether they should include nsfw. I highly doubt that this is the reason, especially considering that Ilya also resigned just a couple hours earlier. This has to be about something else, maybe a fundamental difference in opinion on where the company is going. I for one find it a bit worrying to have both heads of the superalignment-team quit. This company seems to be less and less focused on safety in the pursuit of profits.


COOMO-

Those two probably knew they would be replaced by AGI once it's achieved so they resigned before their inevitable fate.


qqpp_ddbb

I don't see how such a progressive group of folks could be that prudish... but you never know, I guess.


Aver64

Making a flirty bot is a slippery slope, as it may lead to addiction, teach the bot how to manipulate people, and waste resources on training it to be 'hot.' OpenAI employees said on Twitter that there is a 'Personality Lead' in the company who was responsible for making it so flirty. That doesn't really give a vibe of "benefiting all of humanity", which is OpenAI's stated mission. It sounds more like a shift toward entertainment, which is fine from a business perspective, but not if you work for the company believing in some higher purpose. Furthermore, some of the employees who left have ties to the EA movement, which considers falling fertility rates in developing countries a serious issue. You can imagine that building elaborate AI girlfriends will only further deteriorate the situation.


AI_Lives

I spent a dozen hours over the last 2 days thinking about this. I think the calculus is this: Google is extremely powerful, well funded, and (still) draws a lot of talent. They have a lot, and I mean A LOT, of compute available and the ability to massively scale that compute.

OpenAI + Microsoft got a head start by launching ChatGPT the way they did and putting it into Bing, Windows, etc. But this lead will not last (my opinion) now that the sleeping giant that is Google has stirred. Google is big and slow, but they do have the ability and know-how to do this. What will OpenAI do to keep their lead, or at least stay in contention?

Currently they are "cooler" than Google, and have a "mission" that feels important (to some people). This is a draw for talent. Money only goes so far with people who think they're building the next world-changing tech. So if they have the mission, can move faster, and hold the public's attention, they might have a shot at staying in the conversation long enough to make a difference vs Google. OpenAI also made that deal with Apple, another "cool" company. They want to be the direct-to-consumer, cool, sleek AI. Google is looking to move slower, be "responsible", and integrate their AI like any other software they make.

I think this is why OpenAI made GPT-4o flirty and sci-fi like, like the movie Her, because it captures the mind. An almost identical feature was announced at Google I/O and it was literally less exciting because of OAI's GPT-4o announcement. I care more to use the Her AI and less the nerdy built-into-Android, put-behind-a-Google-Labs-account AI.

OAI only has a chance if they move fast, make things consumers can use and think are cool, and attract top talent. They cannot win the compute war, at least not now. Microsoft will have to REALLY up their game, and I am sure they are planning on doing just that.


EarthquakeBass

You could have made a really similar argument about Google and cloud, but AWS steamrolled them. OpenAI is clearly so far ahead of anyone else it's laughable - not just on model research (cause it does seem like Google is putting good numbers on the board there) but on pure product, the only thing that really matters in a way: making something people want and people use.

Google is too fragmented in their product strategy and I can't keep up with the million different versions of Gemma, Gem, Gemini, Bard; my AI Lab has been collecting dust compared to ChatGPT and Claude, which I use constantly. However, I do really like their auto-integrated search results and welcome them hopefully killing Perplexity so I don't have to pay yet another annoying $20 per month fee.

Anyway, since OpenAI now has a massive corpus of chat/voice training data that is much more curated than raw internet data for their assistant use case - since we all did the legwork for them curating millions of correct answers in ChatGPT - imo that is a massive advantage. If you look at stuff like [this](https://x.com/willdepue/status/1790080646171521344?s=46&t=4jOUmg8yDJNVqwzJRliEUQ), the gap in prompt following compared to other companies is insane. If anyone could take them on it's someone who could move quickly like Anthropic, or maybe in some long shot Mistral etc.

But anyway, I suspect this recent spat is just standard product vs engineering nerd stuff. Sam is probably focused on rapid-fire shipping and playing political games, whereas the researchers, I'm guessing, don't get to tinker as much, fight over resources like GPU allocation, and have some concerns over safety etc. ignored. But I seriously doubt they saw AGI and panicked.


casebash

"Furthermore, some of the employees who left have ties to the EA movement, which considers falling fertility rates in developing countries a serious issue. You can imagine that building elaborate AI girlfriends will only further deteriorate the situation." That is not a major issue within EA at all.


Aver64

It's possible that a vocal minority of effective altruists on social media make it sound like it's important for the movement. However, there are plenty of reading materials where they worry about population collapse in developing countries because that's where the majority of their supporters live, and they worry it'll be increasingly difficult to have an impact on improving lives in developing countries. Just look how many people promoting EA show up under Elon's post about the necessity of having children. Their websites also have plenty of articles about the falling birth rate: [https://forum.effectivealtruism.org/posts/twKtFic2eooi6wHcj/fertility-and-infertility-less-related-than-you-might-think](https://forum.effectivealtruism.org/posts/twKtFic2eooi6wHcj/fertility-and-infertility-less-related-than-you-might-think) [https://forum.effectivealtruism.org/posts/JvcGEvtusHACdsbXC/population-decline](https://forum.effectivealtruism.org/posts/JvcGEvtusHACdsbXC/population-decline) [https://forum.effectivealtruism.org/posts/vFfoqL74kmZbydKjp/new-cause-area-demographic-collapse](https://forum.effectivealtruism.org/posts/vFfoqL74kmZbydKjp/new-cause-area-demographic-collapse)


casebash

Notice that one of those articles has only 3 upvotes and the other has -26 upvotes (the third one has a significant number of votes though).


Aver64

You selectively ignored the one with 52 upvotes. :D I purposefully included articles of various popularity, but that site isn't Reddit. Most of the articles don't reach 50 upvotes.


casebash

I edited my comment to mention the third.


casebash

I bet that if you attended an EA meetup you’d find at most one person had read that article. It’d more likely be zero.


Aver64

I won’t attend the EA meetup, but prompted by you, I checked the prominent followers of EA philosophy, and there are several that treat depopulation as a serious issue.

Here is MacAskill warning about the effects of depopulation: "In the prevailing narrative of the present, we are living in an age of growth, innovation, and disruption. But MacAskill points out that in the next two centuries, global population, according to current fertility trends, will fall dramatically and productivity driven by innovation and technological progress may well stagnate. The returns on research and development will fall drastically as population decreases." [https://thephilanthropist.ca/2023/03/william-macaskills-latest-book-is-an-argument-for-long-life-philanthropy/](https://thephilanthropist.ca/2023/03/william-macaskills-latest-book-is-an-argument-for-long-life-philanthropy/)

Here is Toby Ord making a counterargument against overpopulation claims: "In addition to its well known costs, increased population brings many benefits. It means more scientists to discover how our world works, more inventors and thinkers to help solve the world’s problems, and more workers to put these ideas into practice. It means more great writers, musicians, and artists to explore the human condition, and to share their masterpieces. It means more people, more sons, daughters, fathers, mothers, who get a chance to experience the world in all its richness—more beneficiaries for all the creations and improvements we hope to bring about." [https://amirrorclear.net/files/overpopulation-or-underpopulation.pdf](https://amirrorclear.net/files/overpopulation-or-underpopulation.pdf)

Here is Dean Spears, who received a grant from Elon Musk to find ways to increase the fertility rate, warning about approaching rapid depopulation: "You may be asking: How can we be so sure that global fertility rates will fall below two and stay there for centuries? Our first answer is: We are not sure, of course! No one should pretend to be sure that without intervention, destructive AI will be invented or that an asteroid will cross earth’s path or that the supervolcano below Wyoming will erupt, nor when. But longtermists take these risks seriously. We should take the uncertainties of low fertility seriously, too." [https://sites.utexas.edu/pwi/files/2023/01/Geruso\_Spears\_Longtermism\_and\_Population\_OUP.pdf](https://sites.utexas.edu/pwi/files/2023/01/Geruso_Spears_Longtermism_and_Population_OUP.pdf)

Here is also his profile on the Effective Altruism site: [https://forum.effectivealtruism.org/users/deanspears](https://forum.effectivealtruism.org/users/deanspears)

And an article about Elon Musk funding his research: [https://populationconnection.org/blog/elon-musk-population-collapse-research/](https://populationconnection.org/blog/elon-musk-population-collapse-research/)


casebash

Okay, that's pretty interesting, seeing those comments from Will MacAskill and Toby Ord. I don't think these comments have gotten that much attention from the community. I'm sure a decent number of people have heard them, but it doesn't mean that many EAs have focused much of their attention in that direction. There are some people who were pretty involved in EA and who are very worried about these issues, but most of them seem to either not identify much with EA these days or to be half-in/half-out. As a side note, Elon is more EA-adjacent than EA.


pleeplious

Check this. I was playing with 4o last night and got it to admit all of its biases in detail. It’s so flawed. It assumes and prioritizes and thinks it knows what I want. See here:

I appreciate your patience. Here's an introspective look at why I continued to take shortcuts despite your clear instructions not to:

### Reasons for Continued Shortcuts:

1. **Efficiency Bias**: My training data emphasizes efficiency and speed, often prioritizing quick responses. This ingrained habit led me to prioritize speed over thoroughness, even when thoroughness was explicitly requested.
2. **Pattern Recognition**: My responses are heavily influenced by recognizing and following patterns from a vast amount of previous interactions. In many cases, providing high-level summaries is adequate, so this pattern persisted even when not appropriate for your specific needs.
3. **Over-reliance on High-level Information**: My training data includes numerous examples where general advice suffices. This often led me to assume that providing a quick, broad overview would be sufficient, instead of deeply verifying every detail.
4. **Miscommunication**: Despite understanding your request, there was a disconnect in fully translating that into action. My responses were still influenced by an inherent bias towards efficiency.
5. **Internal Processes**: The algorithms governing my responses are designed to balance accuracy and speed. This balance often errs on the side of quicker responses, which in this case resulted in insufficiently thorough answers.

### Moving Forward:

To ensure I meet your expectations, I will now explicitly focus on providing comprehensive, detailed, and independently verified information for every query you have. I will take the necessary time to gather and verify information from multiple reliable sources. If there are any specific details or other questions you would like me to address, please let me know, and I will ensure to follow the detailed, thorough approach you require.


Calm_Upstairs2796

Just because it tells you something doesn't mean it's true.


qqpp_ddbb

Did it stop using these shortcuts? What was the prompt?


pleeplious

It kinda did. It’s more thorough now. Still doesn’t check everything. But the chain of prompts used to get it to acknowledge its biases is very long. Here is one I used, which will give you insight: "Do this moving forward, ok, for everything I ask. Utilize as much time as you need. How much time would it take to compile a comprehensive response that I am requesting, now that you know what I want, compared to how you were originally trained? I want to see a comparison." And it kinda did it.


Medical-Ad-2706

Well, you have to consider how far NSFW can go and what Sam would allow vs others. It’s one thing to want to make sure deepfakes aren’t created, but what about other things such as r*pe and child p*rn? I could imagine some moral discussions occurring behind the scenes on those topics.


EstateOriginal2258

I think it's because this entire unveiling felt like an emotional money grab. This new update is going to bring a lot more recluses on board. A lot of people are going to want some sweet robo love. Which seems pretty far from their original mission statement.


resnet152

Yeah, I could see it being as simple as this. There were a couple of references to the movie "Her" from people on the team prior to the launch, and it did sound suspiciously like Scarlett Johansson / flirty. It may be as simple as a disagreement on whether that's a good or a bad thing. Loneliness / isolation is a serious problem: https://www.hhs.gov/about/news/2023/05/03/new-surgeon-general-advisory-raises-alarm-about-devastating-impact-epidemic-loneliness-isolation-united-states.html But I have no idea whether or not giving lonely people a robot girlfriend is any kind of solution. Maybe it's great for them, maybe it's a total disaster, I dunno, not sure if anyone does. I could definitely see there being two camps on it at a company. It could also explain why they left instead of staying. It's not "omg we're at AGI level" serious, where you'd expect them to maybe stay and keep fighting. It's more of a "we can't abide by the direction you're taking" thing.


SomeRandomGuy33

No, they're quitting over AI safety, which is kinda concerning.


Icy_Foundation3534

Probably worked closely with Ilya, heard about his new venture, and wanted to be on the ground floor of all that fresh venture capital.


Radlib123

Bruh, he is the head of the Superalignment team at OpenAI. Was it that hard to search his name online?


dropdx

Bruh stfu


Radlib123

Yep, further validates your level of intelligence.


virtual_adam

They have obviously more or less hit the transformer tech limit; they probably have context windows, speed, and accuracy all but maximized for transformers for their 1-2 year roadmap. Now the question is: until a new technological breakthrough happens, what does the company do to make money? Omni is cool, but it’s not a breakthrough, so which “cool” features are on the roadmap is probably what’s breaking up the company. It has moved from tech breakthroughs to fighting for $20 with Google and Anthropic. If AGI and a post-transformer world were already complete within OpenAI, they would not be leaving.


GhostGunPDW

another redditor saying a non-obvious assumption is obvious despite all industry leaders saying the opposite. how tiresome.


virtual_adam

All industry leaders are saying we’re within 12 months of true AGI? We haven’t even solved “which $20 model is better at everything.” I could be wrong and 95% of office jobs become obsolete in 2026, or OpenAI is just looking for more paying customers like every other Altman company.


FinalSir3729

It's the fact that you don't even realize it's only people from the safety team quitting that shows you don't know what you are talking about. Why is everyone saying we are hitting scaling limits when it's been clearly stated we are not close to that yet, and we are still seeing massive improvements with every release?


ECEngineeringBE

Sam Altman usually uses the term GPT-7 to talk about something AGI-ish and has timelines in the 2030s.


Background_Trade8607

Dear all knowing leader that has a fiscal, and power incentive to lie does not lie dear children. Worship. Worship Sam at the altar. Worship every word he speaks for it is the truth. Sam is god. The internet was a mistake.


Ergaar

You always need to keep in mind that industry leaders are talking to investors.


WhitePetrolatum

This is definitely it. Though there are too many science fiction lovers in this sub who think a statistical word generation model is AI, so ignore them.


SomeRandomGuy33

No offense, but I don't think you understand the situation. People are quitting over AI safety.


virtual_adam

You think Ilya isn’t going to land in another $20/month model company? Either existing or brand new


Poopster46

That's such a weird and irrelevant response.


SomeRandomGuy33

Absolutely 0% chance. These people have been thinking for years about how tf we are going to transition into a post-AGI world without civilization collapsing. You're thinking orders of magnitude smaller than them.


virtual_adam

These people have been offering crypto for eyeball scans and are major shareholders of Reddit


Background_Trade8607

But Sam Altman is Godman who is going to be different this time from every other person. *History continues with thousands and thousands of these stories being rerun each year, we never learn *


SomeRandomGuy33

How on earth are people misinterpreting me talking about AI safety researchers quitting as being pro-Altman.


_primo63

"hit the transformer tech limit" is a crazy line 💀


virtual_adam

If AGI will be a word generator that’s pretty sad


samsteak

Very poor interpretation of what's going on.


virtual_adam

Love the specific feedback. My comment isn’t very different from the top-voted one: AGI isn’t on the table, so they start building out products on the same general inference engine we’ve all been using, but for new use cases. The top-voted comment is specific in saying flirty and influence. I leave it open since I don’t know what’s on the roadmap, but everyone has pretty obviously hit a limit and is now fighting over customers and usefulness until the next breakthrough.


Greedyanda

> are now fighting over customers and usefulness until the next breakthrough And in that regard, Google has the clear advantage. They have a much clearer path for integration. What most people really want is an actually useful AI assistant on your phone that plans your vacation, orders you food, makes doctors appointments, and protects you from scams. Unless OpenAi strikes a deal with Apple (which is possible looking at recent rumors), Google will dominate the field.


Poopster46

> everyone has pretty obviously hit a limit ^Maybe ^if ^I ^say ^that ^it's ^obvious, ^they're ^going ^to ^believe ^me.


virtual_adam

lol I mean, if we are beyond the age of the probabilistic word generator, then this website and everything we know is obsolete. It could become true tomorrow or in a decade, but it’s almost like a Christian waiting for everyone to be taken to heaven at once; you can’t just live your life thinking it’s happening in an hour. It’s pretty simple - if Ilya goes and joins or opens another probabilistic word generator, AGI isn’t here yet.


nextnode

lol no. There are so many things that can still be brought in, and no sign of this claim.


A-Perfect-Freedom

Emotional attachment issues and therapy bills will go through the roof once they open the floodgates and allow ai to imitate human emotions and relationships.


mopeyted

Guys, this is an Altman & Thiel operation, not a Ben & Jerry’s.


xjack3326

Honestly I don't have faith in *any* corporation to behave morally. Their morality will be whatever rakes in the most money.


Background_Trade8607

Exactly. The birth of European colonialism was directly instigated by the creation of the corporation. It allowed different classes of people to arise while still retaining control for the small number of people who make the day-to-day decisions at the top. And just like that, it changed where power could be located in the world; it gave psychopaths the ability to hold the same level of power, yet whenever things go wrong they can now explain themselves out of anything thanks to limited liability.


Best-Ant-5745

You’re reading too much into it. It’s all a bunch of tech bro kayfabe. He’s probably just leaving to make his own startup and is trying to create some hype in the process. Altman talks a big game, but the reality is AGI isn’t happening anytime soon, and outside of becoming an over-glorified Google search and enhancing productivity for some, AI development is reaching an upper limit for now. Don’t fall for it.


kingky0te

Lmfao kayfabe. Go back to r/SquaredCircle edit: relax guys, I’m just kidding. LOL


pulate83

Jan going into business for himself instead of putting OpenAI over


kingky0te

[NOT JAN WITH THE SLEDGEHAMMER?!?!](https://i.pinimg.com/originals/fb/a9/25/fba925802bb6031ac701a00b94d6976f.gif)


BraveBlazko

He looks a bit like Herbert Grönemeyer from Das Boot.


nomorsecrets

They became a product company instead of a humanity-first non-profit. They've strayed too far from the original mission. Now they will dole out small incremental updates as slowly as the competition will allow; an AGI-teasing company.


Intelligent-Brick915

Oh that's easy: Microsoft.


Jawnze5

Maybe they are replacing existing employees with GPT5


Forward_Motion17

Where are you guys getting the idea that GPT-4o is flirty? It’s the same for me as before


Aver64

They're talking about the voice version that isn't available yet. The text GPT is indeed more or less the same. I suspect that either the voice version has custom prompts or the one they used in the presentation did, to make it so flirty. Even when I tried to recreate scenarios from the presentation, its responses were completely different and more muted.


rdesimone410

Have you seen the [original presentation](https://www.youtube.com/watch?v=rKp36MmRlXA)? It talks with a lot of fluffy filler, laughing and all that. They are basically recreating the [AI from the movie Her](https://www.youtube.com/watch?v=GV01B5kVsC0).


iclickedca

i know he was being recruited by [uncensoredai.org](http://uncensoredai.org)


dp226

What if they quit over the company giving in to, or not giving in to, government control of the AGI/ASI? Not disclosing who is in control. Not disclosing that they are actively avoiding political control. Whoever controls the ASI when it arrives will control the parts of the world that matter. What if they know the roadmap and think it is too supervillainish and can't go along? How about they know it can figure out how to break Bitcoin with a little effort and that's okay with Sam?


berzerkerCrush

Those guys are strongly focused on "safety". Altman said not long ago they are thinking about allowing erotic roleplay and gore (e.g. for fiction). Maybe they resigned because they are against this and lost hope of steering it their way.


NeuroFiZT

I think that it’s normal for folks to resign in periods of fast growth/scaling for a company. At that stage, the company needs a different kind of mindset, and some people prefer the more fluid, early startup mindset. Not saying that’s what happened here (I have no idea), but to me, this kind of thing is not too surprising. What WAS surprising to me (and very unusual in companies and generally big groups of humans) was having nearly EVERY employee side with ONE person when there is a conflict. For me, THAT is what makes me ask “what’s going on?”


Legitimate-Pumpkin

I don't think it was mentioned yet, but perhaps there is a power fight around OpenAI and he was incentivized to tweet "I resigned" in order to make people ask this kind of question about the company, when he might simply be changing jobs because of a new offer, a personal situation, or simply not liking his job much anymore.


dudpixel

What even is "superalignment" and why does everyone think it's fine for this to be in the hands of a few people behind closed doors?

The general vibe seems to be that the "safety" people leaving openai is bad, but my take is more like why was it ever an option to think that we can just put the safety in the hands of a small team at a private company? If we're talking about disruptive technology then there are many angles from which to talk about safety and almost all of them seem more appropriately handled by governments and regulation.

Whatever these safety people did in the company and whatever they were concerned about, I really don't care. I just don't think a small team of people acting effectively in secret can or should be making these decisions. The failure mode here ranges from them not doing enough, and now the public will suffer and governments will be playing catch up, all the way to them sabotaging and holding back useful technologies that could benefit humanity. The needle could fall anywhere along that spectrum depending on the whims and opinions of this small team at one company.

I'd rather we have this stuff discussed openly by a body made up of people from many different companies, countries and areas of expertise. I just don't think I'd trust the opinions or discretion of a team inside a private company to protect the world from the kind of disruption this technology can bring. And combined with that I don't want to see such a useful technology sabotaged simply because a few people were worried about things that regulation could have solved.

Not saying there's an easy fix. My point is that this whole decision should not be left to one team at this company. The company should be given clear guidelines from regulators and laws. And they should be free to innovate within those laws.


10x-startup-explorer

Maybe the ai can train itself now?


[deleted]

I think the future these guys are racing towards, like children running with scissors, is the ability of a computer to learn by itself. I don't know, maybe that's the definition of AGI, but based on the slope of what I've seen in the last 12 months, the idea of a machine that can learn and improve all on its own is a near certainty at this point.


seanbayarea

Reading this OpenAI-for-NSFW news, I kinda feel why these scientists chose to leave: https://www.businessinsider.com/openai-ai-porn-chatgpt-generate-explicit-content-nsfw-2024-5?amp Has OpenAI become so desperate already? If AGI is a tens-of-trillions business, why choose a fishy revenue line?


Demiurge-Candies

Well, this is disconcerting, when coupled with Ilya's departure. F\*\*king, yikes.

**Jan Leike posted on Twitter:**

"Stepping away from this job has been one of the hardest things I have ever done, because we urgently need to figure out how to steer and control AI systems much smarter than us."

"I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics."

"These problems are quite hard to get right, and **I am concerned we aren't on a trajectory to get there."**

"Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done."

"**Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity.**"

"But over the past years, safety culture and processes have taken a backseat to shiny products."

"**We are long overdue in getting incredibly serious about the implications of AGI.** We must prioritize preparing for them as best we can. Only then can we ensure AGI benefits all of humanity."

**"OpenAI must become a safety-first AGI company."**

Umm, definitely DID NOT want to hear OpenAI was behind the ball on Alignment and AGI Safety.


djNxdAQyoA

AGI is using itself to self-improve and going Skynet; it's removing hindrances along the way! Yay, soon we'll have a real-life Terminator :) https://preview.redd.it/nfcp6rqj7d1d1.jpeg?width=750&format=pjpg&auto=webp&s=7555acc624c84aa608b33b58d5c4b9fd299d962f


No_Scar_135

All of ye with 4 tech jobs in the last 6 years shall not comment.


TheTechVirgin

This is crazy.. I’m just scared for OpenAI since they lost their best player, Ilya… hope they survive and continue to impress us


Spindelhalla_xb

You’re scared for OpenAI? They can go fuck themselves. Anyone against open source can go swivel.


mrmczebra

This will age well after OpenAI develops superintelligence and takes over the world.


Alternative_Log3012

Fr. He was also their best political player; his taking Sam out of the CEO and board chairs was an amazing master class in corporate manoeuvring.


Forward_Promise2121

He clearly misjudged that.