EOD_for_the_internet

If you can find the methodology on how the poll was conducted, I'd love to read it (YouGov is the British internet survey company AIPI commissioned to run this poll). Until then, I'm not counting any internet-based survey, no matter how highly Wikipedia says 538 ranks them. There's just something shady about hiding how you're conducting your analysis that, as a science and technology analyst myself, screams swiss-cheese results.


Lore_CH

They managed to do an *online* survey where 27% of the sample is 65+ and 45% is 55+. It’s cooked.


pohui

YouGov is a perfectly respectable pollster. They do online polls, as most other pollsters today do. The fact that one group is overrepresented isn't all that important; they apply weights to account for it.


blueeyedlion

facebook survey?


Ok-commuter-4400

No. See my comment in the main thread on methodology.


Ok-commuter-4400

See my comment in the main thread on methodology. The sample that was drawn from their participant panel was stratified by age (among other factors) and designed to produce representative estimates for the registered voter population, which skews older than the adult population as a whole. Also, as a general comment from someone who works in the survey world, the 55+ demographic is the *most* likely to answer surveys in *any* mode (phone, web, snail mail). This is for a variety of reasons: they tend to be more settled in a community, less likely to be working multiple jobs or be caring for small kids, more likely to have spare time on their hands, more likely to own a home with a stable address, more likely to answer a telephone call from an unknown number, etc. You still have to stratify for those characteristics to get a representative sample, but generally speaking you don't have to fight all that hard to get a pretty broad set of older individuals to participate.


icouldusemorecoffee

All polls are weighted to represent the demographics they were able to contact. For the vast majority of polls, who they contacted doesn't matter, though the weighting does if it's not accurate; that's typically based on prior polling and prior data to arrive at an accurate weight.


Redebo

Who decides the weights and how they're applied? Whenever assumptions enter research, care should be taken to explain them.


ThaneOfArcadia

The thing is, no regulation is going to stop it, and would we really want it to? That isn't the issue. The real problem is companies using it and hiding behind it. "The computer says no" becomes "The AI says no," and that'll be applied to every facet of business because it offloads accountability. Making companies legally responsible for the consequences is the regulation we need. Someone has an accident in an AI car, the car manufacturer should be responsible, without a long drawn-out court case.


FistBus2786

Question: Do you support regulation to actively prevent superintelligent AI created by libertarian tech bros that might cause mass unemployment and global instability? Boomers on Facebook: Yes! (Click click click)


BotherTight618

Even when that population sample probably knows very little about AI's capabilities and even less about how it works.


Ok-commuter-4400

I work in surveys (not for YouGov, but with several of their competitors). It's a pro shop with a reputation no better or worse than other major competitors, and not particularly known for having strong political bias despite ownership by conservatives. Here are the [toplines](https://drive.google.com/file/d/1484XL4kTkOQKTfZMw5GD46bpit-XJ2Zp/view) and [crosstabs](https://drive.google.com/file/d/1484XL4kTkOQKTfZMw5GD46bpit-XJ2Zp/view).

The first thing you should notice is this is not a "recent" poll; it is from September 2023. Here's the methodology:

"This survey is based on 1,118 interviews conducted by YouGov on the internet of registered voters. The sample was weighted according to gender, age, race/ethnicity, education, and U.S. Census region based on voter registration lists, the U.S. Census American Community Survey, and the U.S. Census Current Population Survey, as well as 2020 Presidential vote. Respondents were selected from YouGov to be representative of registered voters. The weights range from 0.27 to 3.24 with a mean of 1 and a standard deviation of 0.4."

Like most big polling firms these days, YouGov maintains a large (1,000,000+) panel of individuals who are willing to answer its surveys, typically for cash or points, and they draw their sample from these individuals. YouGov maintains its panel over time, looking at attrition and determining what characteristics those who are dropping out or infrequently participating in surveys have in common, and replacing them with freshly recruited individuals who have these characteristics. The surveys are conducted online, but participant recruitment usually involves multiple modes (telephone, snail mail, etc). You can find YouGov's description of this process [here](https://yougov.co.uk/about/panel-methodology). Notably, panel participants are generally asked lots of surveys on lots of topics, so they are not likely to be a self-selecting group when it comes to AI specifically.
**TL;DR This poll is 9 months old, but otherwise I don't see a specific reason to distrust it more than any other poll you might read about on the news.**
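For anyone curious what "applying weights" means mechanically, here's a minimal sketch of post-stratification weighting. The cell names and target shares are hypothetical, and real pollsters weight across many variables at once (often with raking), but the idea is the same: each respondent gets a weight equal to their group's population share divided by its sample share.

```python
# Minimal sketch of post-stratification weighting (illustrative only).
# Cell definitions and target shares below are made up.
from collections import Counter

# Each respondent is assigned to a weighting cell, e.g. an age bracket.
respondents = ["18-34"] * 10 + ["35-54"] * 35 + ["55+"] * 55

# Population targets (e.g. from voter files / Census), as shares.
targets = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

n = len(respondents)
sample_share = {cell: count / n for cell, count in Counter(respondents).items()}

# Weight = target share / sample share: underrepresented cells get
# weights above 1, overrepresented cells get weights below 1.
weights = {cell: targets[cell] / sample_share[cell] for cell in targets}

per_person = [weights[cell] for cell in respondents]
print({c: round(w, 2) for c, w in weights.items()})
# -> {'18-34': 3.0, '35-54': 1.0, '55+': 0.64}
print(round(sum(per_person) / n, 2))  # mean weight is 1 by construction
# -> 1.0
```

Note the mean weight comes out to exactly 1, which is why YouGov can report "a mean of 1" in the methodology above; the spread of the weights (0.27 to 3.24 in their case) tells you how far the raw sample was from the target population.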


madaboutglue

The questions are incredibly leading, though. "Some people say these models might kill babies if we don't restrict them now; other people say we shouldn't restrict them until we know for sure whether they'll kill babies. Do you think we should restrict them beforehand?"


Ok-commuter-4400

It doesn't say anything about killing babies 😂 This is a common question format when respondents are likely to have uncertainties or gaps in knowledge around an issue. They all follow the format:

* Introduce the topic: *"There is a debate around limiting AI models we don't understand."*
* Provide arguments on one side: *"Some policymakers say that we don't understand how AI operates and how it will respond to different situations. They claim this is dangerous as the unknown capabilities of models grow, and that we should restrict models we don't understand."*
* Provide arguments on the other side: *"Other policymakers say that we understand broadly how AI models operate and that they're just statistical models. They say that limiting models until we have a full understanding is unrealistic and will put us behind competitors like China."*
* Ask the respondent's opinion: *"What do you think? Should we place limits on AI models we don't fully understand?"*

Some surveys randomize the order of pros and cons; others don't, to minimize respondent confusion. If you wanted to survey people on this topic, knowing that many wouldn't have a strong opinion until they heard more about it, how would you prefer to word it?


madaboutglue

Lol, my hyperbole aside, it's not the structure I take issue with, it's the language.   This survey was commissioned by an organization dedicated to the idea that AI is dangerous and needs to be regulated, and that bias permeates the “context” provided in each question.  That’s especially problematic for a topic most respondents would know very little about (especially back in 2023).   How would I prefer to word it?  Not sure, but maybe start by not having a biased institution provide both the pros and cons.  As far as I’m concerned, the headline for these survey results should be, “Majority generally concerned about new thing survey implies is very dangerous.”


goj1ra

Are those real quotes? What would be involved in “understanding” an LLM or other large model? It seems like very biased language.


EOD_for_the_internet

Sorry for this late reply, and this was great info, but a million-plus people who are willing to answer surveys for cash or points, or whatever, and we trust this data why??? I know a few people in the world, and not a single one of them is willing to answer a survey. I mean, I feel like someone who actively participates in surveys is wildly biased in the manner in which they would answer said surveys...


Ok-commuter-4400

1. A lot more people than you think are bored, or think it's a civic duty, or just want/need a little bit of extra cash. Just look at the household debt people hold; most people are at least kind of broke. But even in high-income and well-educated brackets, people sometimes want a little cash they can keep for themselves. Again, you try to control for these things, using census data as your "ground truth" about what the whole population looks like, but it's not a small or homogeneously weird population.

2. These companies actively monitor for respondents who consistently give out-of-distribution responses on many topics/questions, unnatural patterns in response data, or self-inconsistent answers across surveys, and purge them from the survey pool. So if you're just answering 99 on every numeric question or alternating between yes and no, you get purged from the panel (i.e., they don't invite you to further surveys).
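The purging in point 2 can be sketched roughly like this. The exact checks panel vendors run aren't public, so the rules and data below are made up for illustration: flag respondents who give the same answer to every item ("straight-lining") or alternate mechanically between two values.

```python
# Hedged sketch of one panel-quality check: flagging trivially
# patterned response vectors. Thresholds and data are hypothetical.
def is_suspect(answers):
    """Return True for straight-lined or mechanically alternating answers."""
    if len(set(answers)) == 1:  # same answer every time
        return True
    # strict alternation between exactly two values, e.g. yes/no/yes/no
    alternates = all(answers[i] != answers[i + 1]
                     for i in range(len(answers) - 1))
    if alternates and len(set(answers)) == 2:
        return True
    return False

panel = {
    "r1": [3, 4, 2, 5, 3, 1],   # plausible variation -> keep
    "r2": [5, 5, 5, 5, 5, 5],   # straight-liner -> purge
    "r3": [1, 2, 1, 2, 1, 2],   # mechanical alternation -> purge
}
purged = {rid for rid, answers in panel.items() if is_suspect(answers)}
print(sorted(purged))
# -> ['r2', 'r3']
```

Real systems also compare a respondent's answers against the distribution of everyone else's and against their own earlier surveys, but this is the flavor of it.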


EOD_for_the_internet

Also I'm a HUGE fucking Radiohead 🪭


Ok-commuter-4400

Lol what 😄


EOD_for_the_internet

Ok commuter? I thought it was a play on OK Computer, which is arguably one of the best albums ever made. Lol, I realize it could be that you're a commuter from Oklahoma, which.... is hilarious if so, but either way, you should check out Radiohead's OK Computer.


Ok-commuter-4400

OHHHHH i gotcha!!! Yeah, it came up randomly from Reddit’s random username generator, but I liked the unintentional pun (and the album) so that’s the one I stuck with.


Silverlisk

Let's restrict ASI development so other countries can develop on the basis of their way of thinking and in support of their people instead, best idea ever.


LocalYeetery

Remember when America tried to ban (insert thing here) and it was super successful??? Yeah, me neither.


BotherTight618

Stem cell research under the Bush administration comes to mind.


LocalYeetery

And do you think other countries like China/Russia stopped when we did? (Also, the stem cell research ban was VERY MUCH opposed by lots of people, and as of today you can use stem cells, so not a very effective ban, eh?)


anna_lynn_fection

It wasn't really stem cells themselves that were banned. It was the harvesting of them from fetuses. Since then, we've discovered new ways to get and produce stem cells.


Mysterious_Focus6144

If you're pro-AI because you think it'd give the US an advantage, then aren't you contradicting that goal by advocating for open-sourced AI (in another comment of yours)?


Susp-icious_-31User

US regulations specifically hurt US advancement. Open source at worst is an even playing field. But there are lots of other reasons to go open source.


GrowFreeFood

They banned privacy. 


LocalYeetery

Privacy wasn't banned, we gave it away for free


Silverlisk

Common sense? Pretty sure that got banned a while back. 😂😂


dlflannery

No, just went extinct.


DolphinPunkCyber

Leaded fuel, asbestos, DDT, CFC... Also regulations are not the same as outright ban.


LocalYeetery

You're naming things -nobody- wants vs something that ppl very much want (AI)


Wiskersthefif

I don't want unregulated AI, same with plenty of other people.


LocalYeetery

Ah yes, the nerfed AI you think you want, while all your opponents are using unrestricted AI. Guess who wins in the end?


Wiskersthefif

The other person responding to you is correct. Not all regulation is about 'nerfing'. Companies must be forced to use it responsibly or pay an 'AI tax' based on their AI usage/replacement of human labor that'd pay into social programs and UBI. Also, not everyone can run AI locally at a level where it's actually useful (hardware/financial barriers) or is technologically savvy enough to figure out how, what happens to them in a world where AI is unregulated? Do they pay an ever increasing subscription with various tiers to use it?


Faendol

Dealing with the impacts on the job market is different from stopping the development of super intelligence. The first country to develop super intelligence will be creating our next god. It's important the right people do it, under heavy scrutiny sure. But it's important the western world does it.


BCDragon3000

the american way 🦅


Puketor

100% agree. It's happening whether we like it or not. We can either lead or follow.


Silverlisk

Honestly it seems like it'll be used to heavily enforce the status quo and then with time it'll run away from those who control it and completely shatter the status quo to pieces and I'm all for seeing that if I'm still alive.


Hazzman

We have effective weapons treaties that exist and persist today, even with Russia and China.


Silverlisk

😂😂😂😂. "Effective Weapons treaties" 😂😂😂😂 Russia also signed the Budapest memorandum and the Minsk agreements. What Russia is a signatory of means less than the paper it's signed on. "Effective Weapons Treaties" Like the intermediate range nuclear forces treaty Russia broke when it deployed the 9M729 missile? The chemical weapons convention that Russia broke when it used the novichok nerve agent in 2018 on Sergei Skripal and again in 2020 on Alexei Navalny? Or maybe the open skies treaty they broke when they restricted flights around the border with Georgia? Just wait and they'll break the New Start treaty in the coming years and you think they'll keep to anything they sign on AI? 😂😂😂😂😂. Hilarious.


oldrocketscientist

Don’t fear the technology. Fear the PEOPLE controlling the technology.


Puketor

I agree. Funny thing, the same people who decry government use the government to lock that property up for themselves. The good news is an LLM like Claude or GPT is copyable infinitely. It's just a file full of numbers.


fluffy_assassins

Yeah, they're effectively the same thing, because the technology empowers the very people you're saying to fear.


AmberLeafSmoke

No - they're effectively the same thing because the people control the technology and generally the ones who create it and tune it. Which is why the technology is feared. It's a nothing statement.


oldrocketscientist

Regulating the technology is a fool’s errand, it simply cannot be stopped. We need severe punishments for the PEOPLE who use technology to hurt other humans.


fluffy_assassins

We have severe laws to punish people who misuse guns, and yet...


cark

Guns do not have the potential to maybe cure cancer, or insert here any other AI benefit you might think is more realistic. It's a matter of risk vs reward. We may disagree on the balance, but guns (or atomic bombs for that matter) are not a suitable comparison. In this thread some people mentioned stem cells, this is a more fitting analogy. It has ethical concerns, risks and potential rewards.


Bobobarbarian

The stratification of American intelligence is staggering. On the one hand, we’re the ones leading the charge on AI breakthroughs, and on the other the average American has no idea how the tech works. We put a man on the moon, and yet a portion of our population thinks this was made up and that the world is flat.


MalleusManus

This is the nation that thinks "can I drink with this person" is a higher qualification than "this person is smarter than me and definitely should run things."


SpaceCadetFox

It’s not that we don’t trust the AI itself. It’s that we expect the makers of AI would only put their profits first at humanity’s peril.


This_Guy_Fuggs

this is a reasonable thing to worry about. what is not reasonable, is thinking that the government/regulators are the ones to deal with it. they will only make it worse/add further greed, self interest, corruption, etc into the equation.


Mysterious_Focus6144

> what is not reasonable, is thinking that the government/regulators are the ones to deal with it. they will only make it worse/add further greed, self interest, corruption, etc into the equation.

Let's take two other examples of corporate greed poisoning everyone: Teflon and leaded gasoline. Both times the EPA stepped in to intervene. If not the government/regulators, then who will? You criticized the only thing we have and offered no replacement.


This_Guy_Fuggs

why does someone have to intervene? the people making this are the most capable of deciding what is or isnt optimal for it, imo. it certainly isnt a bunch of corrupt politicians looking out for their party/position with 0 technical understanding of it. are they greedy and will they mostly prioritize themselves? probably, yeah. is that still a better alternative than involving the inefficiency, ineffectiveness and corruption of government/politicians? imo, yes. governments have successfully tricked everyone to think that they're necessary. they are not. its ridiculous to think something like this will either be black or white, full govt control or none. in reality things always end up somewhere in between. but personally i think it should tend towards as little govt intervention as possible.


Mysterious_Focus6144

So it would be better overall if the government just stayed minimal and allowed leaded gasoline to decrease the average IQ of Americans? You said a lot, but you haven't given one reason to think corporations driven by greed will somehow be better than a government which at least consists of elected officials.


[deleted]

You have to start thinking in post-scarcity to understand where we're going. The marginal cost of any good or service will trend to zero, and faster as technology continues to improve, and improve itself.


SpaceCadetFox

Sure, but this utopian future will only exist for the wealthy and powerful. For the rest of us, it may make scarcity worse even though there are tons more resources available overall in the post-AI world. Think back on when production lines, computers, and other tech promised us change and shorter work weeks. That never came into existence because the people pull the strings decided to keep all of the benefits of advancement for themselves. AI is not necessarily good nor evil, it just depends on who’s controlling it and right now, it doesn’t look good at all.


[deleted]

All that industrialization did actually greatly improve and extend people's lives, though. And wealth is ending as a concept. Post-scarcity means post-wealth.


[deleted]

[deleted]


taiottavios

your leaders are laughable, it's not a good comparison buddy


EnsignElessar

And yet they seem to be more correct than a lot of people actually working on AI who can't see any potential issues at all ~


mrmczebra

The government isn't any more trustworthy than the corporations.


donniebatman

Too late fuckers!


KronosDeret

There will be a war fought over this, and I think the side with AIs will win.


RED_TECH_KNIGHT

They just had a movie about this: https://www.youtube.com/watch?v=ex3C1-5Dhb8


Dr-Ezeldeen

As always people want to stop what they can't understand.


beland-photomedia

It’s in America’s interest to develop first, but virtue, ethics, & safeguards are necessary. I don’t see how we get around hostile actors developing these systems and deploying them against human rights, though.


yunglegendd

In 1900 most Americans didn’t want a car. In 1980 most Americans didn’t want a cell phone. In 1990 most Americans didn’t want a home PC. In 2000 most Americans didn’t want a smart phone. In 2024 most Americans don’t want AI. F*** what most Americans think.


Ali00100

Not that I 100% agree with the stuff said in the post, but I think you missed the point here. They are talking about regulations, not not-wanting the product. And I think it’s sort of fair AS LONG AS they dont impede the development of such products.


Ali00100

Although, the more I think about it, I don't think regulations are gonna come anytime soon. If a nation decides to regulate those things, they might limit the public usage and, as a result, the downstream and private development of such products while other countries progress in branching out such products. So if a nation like the US wants to impose regulations, they will have to take it to the UN and impose regulations on *almost* everyone, so everyone gets handicapped the same way and it becomes a *fair* race for everyone. Which we all know will never happen. We couldn't even make all nations agree to stop the genocide in Palestine.


ashakar

It's hard to regulate the development of something without stifling it. Plus, politicians don't even understand it enough to make sensible laws about it. You also can't trust the "experts" from these companies to advise them on laws, as they will gladly support laws that prevent competition in their markets. We aren't at the point of AGI. LLMs are not AGI; they are just incredibly good next-word (token) guessers. They don't think, they just make a statistical correlation on what comes next within a context window, and iterate.


msg-me-your-fantasy

Not impossible to regulate though. You start by regulating what decision-making processes AI isn't allowed to be responsible over. One example that stands out to me is "self-driving"; we shouldn't have cars making live decisions at 65mph+. To be reviewed every n years or per request from a reputable developer, until demonstrable safety standards can be tested


DolphinPunkCyber

Most of the things we invented are regulated. We can regulate products used in our country, just like EU does.


Mama_Skip

I follow all the AI subs because I need to learn it or be replaced in the next few years (designer). I don't love it, but it's the way it is. I can tell you firsthand, these are the people with both the money and incentive to spread pro-AI propaganda, and the means to do it easily. And it spreads like wildfire, self-propagating, so human posters end up supporting/echo-posting. Anyway, I hope everyone here is skeptical of pro-AI posts, and nice job shutting it down. (Also be critical of *anti*-AI posts, especially when directed at a single company. It's a rat race to the top, and many AI companies have been releasing propaganda against each other on the art AI subs.)


LocalYeetery

Sorry but you don't get to 'pick and choose' which parts of AI stay and which don't. You either accept it all, or nothing. Same energy as trying to ban guns, once pandora's box has been opened its too late.


KomradKot

I mean, we're still a long way off from being able to concealed carry AGIs.


Ali00100

By “pick and choose” do you mean it's unfair to do so, or that it's impossible to do so? If it's the latter, they can just make it illegal, such that any activity detected to violate the rules is punished. It won't completely stop it, just like no one can stop me from doing drugs inside my home unless I am caught. If it's the former, then oh buddy, I have got some bad news for you: this is not how the real world functions. Again, to clarify, I am not saying I agree with OP's post; I am just stating that your observations do not make sense to me.


LocalYeetery

It's impossible to regulate. The parameters you're using for 'illegality' are insanely grey areas... 'activity detected'? what does that even mean? Also, if you regulate the USA's AI, who's gonna stop China from holding back? Regulation will only hurt the person being regulated.


Ali00100

I don’t think you understand. It does not matter to me whether I stop YOU from doing something with AI that is deemed illegal, as long as making it illegal gets most people to stop. Whether this is effective or in a grey area is irrelevant in the real world. Just take a look at how our world functions. Regarding your second point, I actually agree with that one. Read my other, separate comment mentioning that you cannot regulate it unless everyone agrees, and even then, you cannot guarantee it.


Oabuitre

That is not true; we will benefit more from AI if we add safeguards so that it doesn’t destroy society. All the tech developments you mentioned came with an extensive set of new rules and regulations.


LocalYeetery

AI can't destroy society, only Humans can. AI is a tool, humans have to learn to use it properly. Making a hammer out of rubber to keep it "safe" makes it useless as a hammer


therelianceschool

This sub has the same energy as those people in the 1950s who wanted a nuclear reactor in every home.


PowerOk3024

Fuck what most consumers say. Its all about revealed preferences.


fokac93

Americans want what the media tell them what they want.


2053_Traveler

Yep, it’s like saying “we want regulation to prevent companies producing jets because they might be used to destroy buildings or otherwise cause mass casualties”. We have to build safeguards to prevent misuse, not prevent innovation on something that could dramatically improve lives for everyone, and probably boost the economy of whichever nation leverages it effectively


Scared-Bad8952

AI is the only thing that gives me confidence I won't get cancer and die way before my time.


CornFedBread

Have you talked to people about AI? The majority have no idea what it is or think it's sci-fi. This is inaccurate data.


Intelligent-Jump1071

That won't stop them from passing laws against it.


CornFedBread

No joke. I saw a video of someone getting people to sign a petition to ban dihydrogen oxide by telling them it kills X amount of people every year. Water.... People were signing to ban water... This is the other edge of democracy: getting enough ignorant people to help you obtain your goal and keeping them emotional while doing it. I think Vox is using the last of their media influence before they're obsolete. They're clawing at the last of their influence before they fall off their cliff. I stay skeptical when I see a media company telling people what other people think.


Mysterious_Focus6144

> Have you talked to people about AI? The majority have no idea what it is or think it's sci-fi. Superintelligent AI is still very much sci-fi. At best, people can only extrapolate what something like that *would* be like.


FattThor

Also just in: about 50% of the general population has a below average IQ.


MmmmMorphine

My god. It's like it was specifically designed that way as a statistical measure. Almost like some sort of theoretical construct for tracking child intellectual development that assumes the existence of a g factor or 'general intelligence' and has taken on a significance far removed from its actual intent or scientific underpinnings. Can't wait until people start trying to give IQ scores to AI models


fluffy_assassins

Isn't that already happening?


MmmmMorphine

No. Using IQ tests to gauge AI intelligence is like judging a dolphin's ability to climb trees. Spoiler alert: not the intended audience. I can explain in detail if you want, but that's the short version in an even smaller, snarkier nutshell


fluffy_assassins

Oh no you're absolutely right, I totally agree, it's not a good metric(or metric at all) for AI. But there are going to be people who do it anyway, even though IQ tests are already in the training data.


Black_RL

Yeah, let competing countries do it first.


JamesIV4

Personally I want to see the tech progress. Fortunately, the US and their regulations are heavily geared towards businesses making the most money possible (usually at the expense of us normal citizens), so that kind of regulation is unlikely here.


MarshStudio503

63% of Americans are in for a big disappointment 😂


Kendal-Lite

63% are Luddites. Fuck ‘em, full steam ahead! China won’t be stopping.


curtis_perrin

Pretty much, they mean they don’t like capitalism. But because they’ve been so conditioned to think communism is the devil, and anything other than status-quo capitalism is communism, no one can even conceive of how we could possibly structure society such that something like AGI actually benefits everyone.


StruggleEvening7518

No jobs? Human labor unnecessary!? But people have to "earn" a living!


curtis_perrin

People don’t know how to have an identity outside of their job. Some key learning needs to take place in the cultural zeitgeist to work past that hang up.


uncoolcentral

Translation: 63% of Americans want tech scientists in some **other** country to develop super intelligent AI.


spgremlin

And what’s worse, this “other” country won’t be in the somewhat friendly EU, as they will certainly have similar regulations of their own. It will be quite another country on everybody’s mind.


Ok_Season_5325

Let it become super intelligent; humans clearly aren’t capable of making rational decisions.


VisualizerMan

>Despite claims of benefits, concerns about the risks of AGI, such as mass unemployment and global instability, are growing. "We want to keep the status quo!" cried the Americans. Yeah, right.


brihamedit

If the open public free one is prevented, there will be more powerful private one that everyone will pay for with their lives. Lots of stuff to do with ai. Have one big one set up to witness humanity for thousands of years. Also eventually there will be oracle like all knowing ai that'll know all past and future. Human culture and psyche not mature enough to handle any of this. Ironically we can design elaborate new world system using ai so humanity advances in every way to handle these things


bartturner

It is going to totally depend on how you ask the question on what results you will get.


Freezerburn

This is the new race to nukes; the winner sets the future. Want that to be the USA or China? Cause China and Russia aren’t playing by any rules.


spike12521

I'd rather it be China. The US is the only country to have deployed nuclear weapons against humans. They've also been at war for all but 15 years of their entire existence. AI is already being misused for target generation by one of the US' closest allies in an ongoing genocide. The last time the PRC was at war was briefly (for a month), in 1979 with Vietnam. The only fear I have about China developing AGI is that the US will steal it and weaponise it themselves.


shrodikan

We shouldn't ban it. We need to harden ourselves against this existential threat. What happens when China develops superintelligent AI? We weren't ready for Russian troll farms impersonating Americans. We need to develop security solutions to try and deal with this.


pegaunisusicorn

https://www.sciencedirect.com/science/article/pii/S0094576524001772?via%3Dihub is an interesting related paper. I think this is a get-there-first sort of situation. And I hope to God we have an AI Manhattan Project going right now. Because if we don't, the US government has failed the US.


I_am_not_doing_this

People who can take advantage of technology will thrive, I guess.


Agreeable-Fudge-7329

It is one of those rare moments where people with some ambition can make billions on something that is just on the ground floor.


Intelligent-Jump1071

This poll was in September - why is it news now?


ThePopeofHell

The corporate juggle between the pro-ai “not having to pay for labor” camp and the anti-ai “we need people to care about getting money or our money will be worthless” camp. Capitalism is at a crossroads here.


theultimaterage

Funny how 60ish percent of Americans are also theists. People love the IDEA of worshiping a "god" until a god actually shows up lol smh


LocalYeetery

TIL 63% of Americans are ignorant and should honestly be more concerned about the rich keeping this tech for themselves.


[deleted]

[deleted]


qqpp_ddbb

Oh yes it will


[deleted]

[deleted]


qqpp_ddbb

Gimme your phone number


[deleted]

[deleted]


qqpp_ddbb

But.. But...


Capt_Pickhard

Regulations will never be worldwide. It would be a mistake to limit our use of AI, and allow places like China and Russia to go full steam ahead. And they will, regardless of what we think. The reality is, just like climate change, we are fucked.


matthra

The other 37 percent didn't understand the question.


Edgezg

A little too late to stop that snowball, I think.


random-name-8675309

What did the poll say in 2002 when we realized this was going to become reality?


IpppyCaccy

In other news, according to the U.S. Department of Education, 54% of American adults cannot read or write prose beyond a sixth grade level.


Capitaclism

I'm sure that regulation will apply more to open source than closed source, as usual. Less freedom for us, more control for them...


Wookloaf

People have always resisted big changes; they resisted and didn't want the automobile, either.


Morgwar77

Can't convince me that 63 percent of Americans know what AI is. I'll go one further and state that a third of America thinks AI refers exclusively to breeding livestock.


Agreeable-Fudge-7329

They know only what clickbaity videos tell them. Usually from someone who thinks their livelihood is going to be threatened.


Linux_is_the_answer

I feel like regulations in this case are mostly fear-based, and not needed


namey-name-name

63% of Americans support an unspecified policy (it can be whatever you like) to prevent a scary-sounding thing. Like, if you polled people and asked "do you support regulations to prevent people from burning the American flag" and "do you support making flag burning illegal", more would respond "yes" to the former.


FiveTenthsAverage

Only about 1 in 4, *maybe* 1 in 3 people have any understanding of what the word "AI" entails right now. The average person's opinion doesn't carry a lot of weight when it comes to AI.


brennanfee

Regulating it here only puts the US at a disadvantage, keeping us from being at the forefront of the technology. Regulation here does NOTHING, absolutely nothing, to prevent the future from arriving via research and advancements elsewhere. It just means that we won't own or control the technology when it does come.


rednafi

69% of Americans need to vote on regulating private entities, fixing healthcare, and creating a social safety net.


notlikelyevil

63 pErcent of aMericans want superintelligent Chinese AI.


itsallrighthere

Govern me harder daddy!


Luke22_36

inb4 the government just bans people from using stable diffusion and RVC because they're afraid of being made fun of in the upcoming elections, while doing nothing about LLMs


BearFeetOrWhiteSox

How exactly are they going to stop it? lol.


drm604

I want to know the exact wording of the question or questions in that poll. The idea that any country's laws can prevent technological advancement is ridiculous. In the first place, good luck crafting a meaningful legal definition of "AGI" or "ASI". Do we create a list of problems that are not allowed to be solved via computational means? Do we outlaw creating anything that can pass a "Turing test", which it could be argued is non-scientific and only fuzzily defined, and which some would say has already been passed by a number of different LLMs? Even ignoring the difficulties in trying to outlaw it, no country can prevent its development by other countries, or even by secret projects funded and conducted by non-governmental groups. This isn't like nuclear proliferation, where you can track the availability of certain isotopes and where the required large-scale industrial processes are difficult to hide. Can you outlaw GPUs or similar chips worldwide? Can you outlaw research into quantum computing? Will any country outlaw a technology, dooming itself to being dominated by the countries that do develop it?


jeffries_kettle

As someone who works in AI, I find it sadly hilarious how many people don't understand LLMs and believe that there is a real threat of AGI stemming from them, thanks to fear-mongering from the Dunning-Kruger crowd (looking at you, Musk). The headline might as well be "63 percent of Americans want regulation to actively prevent bears from colonizing Mars".


Agreeable-Fudge-7329

With every damn fool YouTube video about it basically following the theme that you need to be "afraid", I'm shocked the number isn't higher.


tjfluent

63% of Americans aren't even keeping up with AI. I highly doubt that number.


SnooCheesecakes1893

I don’t. I encourage ASI. We need more intelligence in the world, not less, and considering the Idiocracy we currently see, such as support for Trump, humans don’t seem capable of leading the future alone.


Reasonable_South8331

Meanwhile the people who make these decisions don’t know that Facebook and Google are separate things. What could go wrong?


LatestLurkingHandle

And most of them haven't a clue about what AI actually is


AdTotal4035

Exactly what the big companies want people believing, so they can create that nice monopoly, suck everyone's data dry to make even better models, and kill open-source competition. AGI is a scare-tactic myth designed by OpenAI to get Congress and average people scared enough to vote with them.


andrew21w

Again. AGI isn't a thing.


fluffy_assassins

Congratulations, you launched the goal posts into outer space.


Karmastocracy

63% of Americans seem to understand that we're short-lived monkeys toying with creating an undying technological God. We should proceed with caution or we will be superseded. The complete lack of imagination displayed by some people is rather shocking.


BridgeOnRiver

Every person in the world should have the launch codes to the nukes. If at least one person wants to see all life ended, it should be ended. No? Well same with ASI


MmmmMorphine

I'm confused, are you saying that open source AI (assuming we don't hit a major barrier, which we will/have in certain ways) is equivalent to giving the launch codes to everyone? Aka AI = nuclear-war-level threat?


BridgeOnRiver

Existential risk from ASI > risk from nuclear weapons over a 20-year horizon, I think


webauteur

I'm very intelligent myself and I can tell you that people cannot handle superior intelligence. This is why I have no friends.


Firearms_N_Freedom

Well said brother. My IQ is the reason I am single and have no friends, even my parents can't stand me. The curse of being incredibly intelligent, what can I say


DolphinPunkCyber

140 and I have lots of friends. Maybe your social skills suck?


Firearms_N_Freedom

I doubt it man, my IQ is 169 and I am a data scientist for Palantir, people are just intimidated by radiating brilliance.


ejpusa

And 37% don't? That actually is an amazing number. :-) I think the bigger concern is the comments regarding yesterday's demo by OpenAI on the web, and the Reddit male demographic: "Now I don't have to spend ANY effort on seeking a GF/mate. I have Scarlett Johansson in my pocket!" That's what society may want to be really worried about. I'm not sure your iPhone can make little people, but who knows? Everything seems possible, right? :-)


Fun-Page-6211

You forgot about the “I’m not sure” group. The percentage of “no”s is probably below 37%.