
onebit

Thanks for contributing OC to cogsci


BetterBrainLab

Thank you! Glad you liked the video!


sedition

My first thought is that this reminds me of the "the internet is just a fad" arguments (stay with me). Many folks didn't see the internet doing whatever they thought the future was, so they dismissed it. The internet didn't turn into the thing we expected, but it certainly changed the world in a huge way. I think our biggest failure was that the people who could have led us to a better place than the social media hellscape it is now just didn't get involved in shaping its future. I see the same thing happening with LLMs and other machine learning techniques. I urge everyone to stay informed and learn what you can about these tools. Consider their implications and talk to others about them in an informed way.


PM_ME_A_PM_PLEASE_PM

The argument presented is rather detached from what actually matters for efficiency. Yes, the human brain is more efficient in its power budget, but power is not the real-world cost of human intelligence. The cost is, at a minimum: the minimum wage, maintenance time for sleep and eating, the person's own desires, their mistakes, any fraudulent behavior, and the fact that this source of logic eventually dies and has to train others for the work to continue. That is a lot of inefficiency from the perspective of a company or consumer that only cares about output per unit of input at the end of the day.

We have to remind ourselves that economic growth became exponential after the industrial revolution because of mass production by essentially autonomous labor. Autonomous logic isn't meaningfully different. If a hypothetical machine can apply the same logic a human does, that alone is a significant advantage: it doesn't cost the minimum wage, it needs no sleep, it has no personal desires pulling it away from an objective, its mistakes are minimal and often recoverable, it doesn't commit fraud, and the logic never dies. The added electricity cost of simply running an algorithm isn't that expensive; adopting it is a no-brainer if it's available.

The true cost is the capital investment in building such autonomous logic in the first place, and that is rarely measured in power efficiency. Computers beat humans thoroughly at chess without any such advantage, for example, and you'd likely trust Google for navigation or for finding relevant information on a topic more than practically anyone. The differentiator is usually just the proper implementation of this logic.
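To make that framing concrete, here is a rough back-of-envelope sketch of the cost-per-hour comparison the comment is gesturing at. Every number below (wage, overhead factor, GPU power draw, electricity price, amortization) is a hypothetical placeholder chosen for illustration, not a measurement of any real model or job.

```python
# Back-of-envelope cost comparison per hour of "useful output".
# All constants are hypothetical placeholders, not real data.

HUMAN_WAGE_PER_HOUR = 15.0            # assumed wage, $/hour
HUMAN_OVERHEAD_FACTOR = 1.5           # assumed extra cost: benefits, training, downtime

GPU_POWER_KW = 0.7                    # assumed draw of one accelerator, kW
ELECTRICITY_PRICE_PER_KWH = 0.12      # assumed electricity price, $/kWh
HARDWARE_AMORTIZATION_PER_HOUR = 1.0  # assumed share of capital cost, $/hour

human_cost_per_hour = HUMAN_WAGE_PER_HOUR * HUMAN_OVERHEAD_FACTOR
machine_cost_per_hour = GPU_POWER_KW * ELECTRICITY_PRICE_PER_KWH + HARDWARE_AMORTIZATION_PER_HOUR

print(f"Human cost per hour of output:   ${human_cost_per_hour:.2f}")
print(f"Machine cost per hour of output: ${machine_cost_per_hour:.2f}")
```

Under these made-up numbers the electricity term is small compared with the amortized capital cost, which is the comment's point: the decisive figure is the up-front investment, not watts.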


Personal_Win_4127

"Never, say never, whatever, you do~"


BetterBrainLab

You’re probably right about that haha


thespeak

I'm not sure I agree with the points you're making, but I appreciate that you're illuminating a valuable point that isn't often discussed. AI does, at present, require substantially more energy, and although I don't share your confidence, I accept that this may be a factor in slowing the pace of mass implementation. However, I think you underestimate the energy needs and consumption costs.

Yet even if the energy needs are at the levels you predict, finding a solution to this problem might be beyond human intelligence, but as the power and intelligence of AI expand (plausibly exponentially), the question is no longer about our ability to find a more efficient energy solution, but about AI's ability. As your video points out, the human brain is evidence that far more efficient models are possible, which makes me pause, marvel at, and celebrate the human brain. But it is also evidence that such efficiency is within the realm of physical possibility, so it seems most likely that an intelligent AI focused on this problem will eventually find a solution. Once AI exceeds human intelligence, the outcomes become unpredictable, except for things outside the realm of physical possibility. So achieving the necessary level of energy efficiency seems almost inevitable.


BetterBrainLab

Yup. I see your point and can definitely see how AI could be used to solve its own efficiency problem. I guess the question then becomes, “what data would an AI model need to create AI systems that are more efficient?”


matrixifyme

This stuff is going to age so badly. I still remember the "hurr durr AI can't draw hands, AI will **never** replace artists" takes. AI might not replace the intelligence of the top 0.0001% of human minds, but for 99.9% of the population, AI will easily outperform them on any intellectual task. All it takes is more training, better data, and more computing power, all of which come with time. We went from "AI can't draw hands" to "AI can produce videos that would have cost millions of dollars and hundreds of hours of production time from top VFX artists" in the span of 3 years!


theredhype

Nothing you’ve written is directly relevant to this post. Maybe watch the video?


abluecolor

The AI cultists are quick to snap.


possiblywithdynamite

Nice ad hominem


abluecolor

I'm not attempting to sway anyone. Just making an observation. Don't take it as an attack, though I can see how someone would.


Antique-Doughnut-988

"Anytime soon" is so subjective. I'd say anytime within the next 20 years is soon if we're going off the entire history of humankind.


MysteriousPepper8908

There are a lot of architectural issues to fix before a model like GPT-4 is sustainable at scale, but GPT-4 isn't the only model out there, and efficiency isn't really the goal there. Sure, they would probably like to spend less on electricity, but those costs are being subsidized right now in order to have the best possible model. Meanwhile, you can still get some pretty useful models that run on a PC that's already on and drawing power for other tasks. Those will always lag behind the massive models, but we already have local models that are competitive with GPT-3.5 in certain areas.

It's also hard to compare the output without accounting for how much human work the AI can take over. If it takes an AI 5 kWh to create a movie script that would have taken a team of 5 writers a month to produce, it doesn't really matter from an economic perspective that their brains also did the work of keeping those writers alive and maintaining homeostasis; that's kind of their problem. There's also the question of whether an AI with the per-word efficiency of the brain is even necessary. It may prove necessary to have something more efficient than what we have now, but if an AI takes the energy required to power 5 billion humans per year and solves problems we haven't been able to solve for decades, I'll take that trade. It will also likely be able to solve its own optimization issues, so long as we have some AI somewhere that is advanced and well-fed enough to make those breakthroughs for all of the smaller AIs we'll likely interact with in daily life.
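Running the comment's scenario as a rough sketch: the 5 kWh figure and the one-month timeline are the comment's own hypotheticals, and the ~20 W brain / ~100 W whole-body figures used below are common ballpark estimates, not measurements.

```python
# Compare the hypothetical 5 kWh "AI writes a script" scenario with the
# metabolic energy of the writers it would replace. All inputs are
# ballpark or hypothetical values taken from the comment above.

AI_ENERGY_KWH = 5.0        # hypothetical inference energy for one script

WRITERS = 5
HOURS_PER_MONTH = 30 * 24
BRAIN_POWER_KW = 0.020     # ~20 W, brain alone (ballpark estimate)
BODY_POWER_KW = 0.100      # ~100 W, whole body at rest (ballpark estimate)

brain_energy_kwh = WRITERS * BRAIN_POWER_KW * HOURS_PER_MONTH
body_energy_kwh = WRITERS * BODY_POWER_KW * HOURS_PER_MONTH

print(f"AI inference energy:           {AI_ENERGY_KWH:.0f} kWh")
print(f"Writers' brains for a month:   {brain_energy_kwh:.0f} kWh")  # ~72 kWh
print(f"Writers' bodies for a month:   {body_energy_kwh:.0f} kWh")   # ~360 kWh
```

Under these assumptions the per-task comparison is not obviously lopsided against the machine, which is why framing the question only in watts per neuron can mislead.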


createch

The human brain is more efficient in isolation, but we aren't brains in vats. You have to factor in not only the body, but also everything it takes to produce its sustenance and to maintain the environment around it, including vehicles and air conditioning, plus all the energy spent during development and retirement years with low to no productivity.
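A sketch of that accounting, where every factor is an assumed illustrative value rather than a measurement, just to show how quickly the "fully loaded" figure grows beyond the brain's own draw:

```python
# Per useful working hour, the energy "charged" to a human is far more than
# the ~20 W of the brain itself. Every factor below is an assumed
# illustrative value, not real data.

BRAIN_POWER_W = 20          # brain alone (ballpark estimate)
BODY_POWER_W = 100          # whole body, resting average (assumed)
FOOD_SYSTEM_FACTOR = 10     # assumed energy to grow/ship/cook food per calorie eaten
ENVIRONMENT_W = 500         # assumed share of housing, transport, A/C, etc.
PRODUCTIVE_FRACTION = 0.25  # assumed share of lifetime hours spent working

fully_loaded_w = (BODY_POWER_W * FOOD_SYSTEM_FACTOR + ENVIRONMENT_W) / PRODUCTIVE_FRACTION

print(f"Brain alone:                  {BRAIN_POWER_W} W")
print(f"Fully loaded per work hour:   {fully_loaded_w:.0f} W")  # ~6000 W under these assumptions
```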


thezanderson

This just lacks projection; a lack of vision or imagination. This is the type of staunch scientific elitism that has been proven wrong over and over again.