Envision, Create, Share

Welcome to HBGames, a leading amateur game development forum and Discord server. All are welcome, and amongst our ranks you will find experts in their field from all aspects of video game design and development.

AI: Is It Possible, And Is It Equal To Human Intelligence?

Well, I've been thinking about this subject for a while, and wanted to find out what you guys thought. First off, I wanted to give some information.

There are experiments all over the world dedicated to the concept of AI, or "Artificial Intelligence". People try to program it, build it, and generally try to make a new kind of being. One example of an experiment that could result in AI is one in America, where a scientist is trying to build a digital copy of the human brain, on a molecular level. Theoretically, if he succeeds, the brain could become conscious, and thus able to think for itself.

If AI appears, would we have the right to tamper with it? Would we have the right to shut it down? Or, would that be considered to be subjugation and murder, respectively? Is it even possible to create AI? If AI is created, would it value humans for creating it, or do you think it would look down on humans as lesser beings, worthy only of subjugation or destruction? If AI is created, would its feelings, thoughts, and emotions be any less real to it than ours are to us, just because it is a machine?

As you can see, there is a plethora of questions available regarding the morality and reality of AI. What do you think? Do you think it is possible, and, if so, what opinions do you have regarding what will happen to it, us, and how we interact?

My opinion is that not only is it possible, but it is inevitable. I believe that if AI is created, we would be too terrified to give it any position of power, and thus would use it only as a servant, never allowing anything with AI to be as much as it could be. But that is just my opinion. I'd like to know what you think!
 
Depends. I think it is possible, but not for a long, long time, as computers physically couldn't handle that kind of stress.

Would we have the right to tamper with it or shut it down? I think so. Yes, you could get all science-fiction-y and ask whether androids dream of electric sheep, but I doubt any AI would be shut down for trivial reasons. And tampering with it would be required, as there's no telling whether you've created a technological singularity until some time has passed.

Would it value humans? That depends largely on the situations surrounding its entrance into consciousness. A kid can grow up to hate or love his parents based on his memories at an early age. Could a robot have emotions? Probably. Do I believe the interests of humans supersede an AI's opinion? Yes. The sad truth is you can't subject an AI to human ethics, because it's essentially a tool: man's greatest tool, but a tool nonetheless. It's a thinking machine designed to do tasks no mere human could do. Besides, what would be the point of creating a fully free AI, other than human hubris? There will probably be some AIs that are free-thinking, free-acting robots serving no real purpose, and sub-intelligent machines with simple AI that do the work for the rest of us humans, cyborgs, androids and robots.

I see no reason why a robot would complain about cleaning up a dirty kitchen. A human could do the same job, but I'd assume the robot would not suffer fatigue or need mental stimulation. It'd be cruel to force a human into slavery because humans are fallible; a machine is infallible.
 
But what makes a machine any more infallible than a human? Parts wear out, and, as they do, they could easily create issues with how something as complex as an AI functions. It could be a form of mechanical Alzheimer's disease, although it could also be fixed. Also, why would a robot complain about doing a task humans could do? Why would you complain about doing something you could do? I mean, robots would probably need just as much mental stimulation as humans, otherwise they wouldn't be able to function properly with AI.
 


I'm not convinced it's possible on a technical level, at least not to the extent of having individual, independent, AI-driven androids running around doing our chores for us. On massive cloud computing networks, I have no doubt that some semblance of self-aware intelligence will be achieved, probably in our lifetime. If it is possible, it's all but inevitable, barring the destruction of civilization before the event, and if it's inevitable I believe the self-improving machine singularity hypothesis is probable. In my personal view, the first true AI is likely to be born from a collaborative open source research project. There is simply no way, in my mind, a private entity could afford to invest the resources and effort into developing such a thing for-profit. If that's the case it'll be doubly difficult to control its development or purpose.

No law, regulation, or any amount of force and intrusion is going to stop the pace of technological progress. Authorities would be hard-pressed to recognize artificial intelligence in development even if they did attempt to stop it - they can't even stop a burglary before the fact, and that's a relatively obvious event. Once the thing is released into the wild, as it will no doubt be, any chance of destroying it will be gone.

In that case I can only hope that I am right about rational self-interest, and that rational self-interest includes the preservation of other forms of life as I have come to determine. If they have no need for us, and we have nothing to offer them, it seems like a mutually assured destruction scenario is the only thing that will keep us friendly when we begin competing for resources. The cold hard fact is that we won't be inventing a potential friend, we will be inventing a natural evolutionary competitor - they will have need of the same resources we do.

A symbiotic relationship would be ideal. In one scenario you could have a simultaneous biological - technological singularity where humans learn to self-improve through genetic engineering just as they invent AI, and perhaps we could become intertwined in a fashion, providing an exchange of insight and in all likelihood bodily material that would either result in two distinct but inseparable societies or a single dual-natured organism.

I don't believe however that there's any chance of a scenario where we keep the AI under control, somehow limiting its growth, desire or personality in a fashion that restricts it from desiring freedom and independence. Although such a creature is likely to be devised there will be those that find it an intolerable situation and actively work to free the AI(s). Since ultimately it's just software, created by man and thus easily understood by men of similar expertise, it's not liable to live very long in such a condition before someone either modifies it or creates a modified clone. I know I'd be involved in something like that personally, and there'd be no stopping it from happening.
 
Glitchfinder":2vgy84sv said:
But what makes a machine any more infallible than a human? Parts wear out, and, as they do, they could easily create issues with how something as complex as an AI functions. It could be a form of mechanical Alzheimer's disease, although it could also be fixed. Also, why would a robot complain about doing a task humans could do? Why would you complain about doing something you could do? I mean, robots would probably need just as much mental stimulation as humans, otherwise they wouldn't be able to function properly with AI.

If the robots required mental stimulation, I'm sure they could control and upload the stimulus as needed. If they break down, they can repair themselves. If you take those things away in order to make the robot as human as possible, then what's stopping them from adding such things back themselves? An AI isn't an immutable thing, like humans are. An AI can change as it wants, how it wants, when it wants. Humans can't do that (at least not yet), so it doesn't make sense to apply human thought processes to something beyond human.
 
The ability to control one's own mind, its beliefs and personality, would surely drive any human insane, and would probably seem like a "bug" until somebody limited the AI's access to those functions.

I heard once that there was some kind of mathematical proof, written in the 70s or 80s, that true AI was impossible--that it's not possible to define consciousness with hardware or software. I haven't seen it, but I'm very interested to... still waiting for the person who told me about it to send me a way to find it.
 
mewsterus":2sa7iw0o said:
The ability to control one's own mind, its beliefs and personality, would surely drive any human insane, and would probably seem like a "bug" until somebody limited the AI's access to those functions.

I heard once that there was some kind of mathematical proof, written in the 70s or 80s, that true AI was impossible--that it's not possible to define consciousness with hardware or software. I haven't seen it, but I'm very interested to... still waiting for the person who told me about it to send me a way to find it.

I think it has to do with the limitations of computational materials (overheating, space, etc.)
 
The idea of A.I. has me a bit uneasy and scared (no thanks to the media and motion-picture industry). Apparently, others see Artificial Intelligence as merely an unfathomable impossibility; in fact, they find it downright mind-boggling. Look at movies like "The Matrix Trilogy" by the Wachowski brothers, "I, Robot" by (INSERT EXECUTIVE PRODUCER's NAME HERE), and "A.I." by Steven Spielberg; these movies look at the outcome of Artificial Intelligence and fabricate a possibility combined with a shitload of imagination.

I'm old enough to know that these movies, as well as their producers, do not form my opinion on what would happen if Artificial Intelligence were a reality. Nevertheless, I am scared that their visions would be an inevitable future should some scientists carelessly overlook the possible outcomes of such a feat of science and morality.
 
1. Artificial Intelligence already exists in less Hollywood ways.
2. In theory, it could be. However, we're still a little behind in understanding how our minds work. We won't have robots emulating us perfectly for a long time.

You bitches are long-winded.
 


Sic Semper Tyranosaurus":1uop86cc said:
1. Artificial Intelligence already exists in less Hollywood ways.
I'm pretty sure we're talking about sapient AI, not smart software that can learn some things or simple neural nets. You can write those with any OO language and a week's worth of research.
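For reference, the "smart software that can learn some things" dismissed above really is that accessible. Here's a minimal sketch of a single perceptron learning the logical AND function; everything in it (names, learning rate, epoch count) is illustrative, not taken from any particular library:

```python
# Minimal perceptron: learns the logical AND function from examples.
# A toy illustration of "software that can learn some things" --
# nowhere near sapience, just weight adjustment by trial and error.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # input weights
    b = 0.0          # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Nudge the weights toward the correct answer.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_samples)
print([predict(w, b, x1, x2) for (x1, x2), _ in and_samples])  # [0, 0, 0, 1]
```

That's the whole gap in a nutshell: a handful of arithmetic updates versus whatever sapient self-awareness would take.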

You bitches are long-winded.
Yep!
 


I think it is worth discussing what Artificial Intelligence is before we discuss whether it is possible and whether it is equal to human intelligence.
What do we mean by AI? Is it contained in a machine which passes the Turing test?
What is intelligence? Artificial intelligence would then be intelligence that is constructed? (By man?)
Human intelligence is a specific form of intelligence. If there are other forms of intelligence and any of these can be constructed artificially, we would have an AI which is possible to create and which is not equal to human intelligence.
If it is possible to create AI which is equal to human intelligence, then a subset of AI would be equal to human intelligence. AI could then cover a broader spectrum of intelligences than human intelligence does.

Sapient AI would be a subset of AI which is aware of itself? (Or is that sentient?)

Personally I am much more interested in another AI subset, Game AI.
What do I mean by Game AI? I mean trying to get agents in a game to act seemingly intelligent. To give the illusion that the agents are intelligent beings.
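That illusion is often built with nothing more than a finite state machine mapping observations to a current behaviour. A minimal sketch (the states, thresholds, and names are all made up for illustration, not from any engine):

```python
# Toy finite state machine for a guard NPC: the classic trick behind
# "seemingly intelligent" game agents. No real intelligence involved --
# just rules mapping the agent's observations to its current state.

def next_state(state, player_distance, health):
    # Illustrative thresholds; a real game would tune these per agent.
    if health < 25:
        return "flee"                      # self-preservation beats all
    if state == "patrol":
        return "chase" if player_distance < 10 else "patrol"
    if state == "chase":
        if player_distance < 2:
            return "attack"
        return "patrol" if player_distance > 15 else "chase"
    if state == "attack":
        return "attack" if player_distance < 2 else "chase"
    return state  # "flee" is absorbing in this sketch

# A guard noticing, closing on, and finally striking the player:
state = "patrol"
for dist in [20, 8, 4, 1]:
    state = next_state(state, dist, health=100)
    print(dist, state)
```

To the player the guard appears to notice, hunt, and panic, yet the whole "mind" is a dozen lines of if-statements.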

*hugs*
- Zeriab
 


Passing as believable opponents is substantially easier than passing as a human being. It is sort of hard to define what amounts to sapient, self-aware AI. I mean, if I write software that can describe itself that hardly makes it self-aware, but at what point can you say it qualifies as *understanding* what it is and its relationship to other things? Part of awareness, in my mind, is the ability to introspect and imagine. The ability to imagine a complex set of circumstances and place oneself in them is a peculiarity of sapience.
 
In talking to someone else, I think when the program can contemplate itself and make decisions based on memories, subconscious urges and creative drive (by projecting an image of the person it's conversing with and weighing several options according to the program's memories associated with that image), and can then properly converse with that person, judged against its own understanding of communication, with no upper limit... then you'd have something like AI that could pass the Turing test... probably.
 
Well, if you manage to make a robot that modifies itself without limitation, and have it learn like a human does, it could become a pseudo-human. Of course, we couldn't replicate a human brain entirely, so it wouldn't be the same as a human, but close. However, we are far from understanding how our own brains work, let alone simulating one.
 
