The book's original title is Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots. The phrase "machines of loving grace" comes from a poem by Richard Brautigan, "All Watched Over by Machines of Loving Grace," which imagines all of humanity watched over by benevolent machines.

The "machines" here are of two kinds, corresponding to two rival technical camps: AI (artificial intelligence) machines and IA (intelligence augmentation) machines. The AI camp hopes that machines can replicate human capabilities, becoming intelligent and even conscious like humans; the IA camp hopes that machines can be used to strengthen or extend human capabilities.

In my opinion, although the two camps start from completely different premises, they are not so different technologically, especially in the sense of "making humans stronger, and possibly replacing humans altogether."

As a generation that grew up through the 8086/386/486/586 era, we have witnessed the world change over the past 30 years, and now, perhaps, comes the real "miracle-witnessing moment": at the very least we have seen AlphaGo (and everything has only just begun).

To be or not to be? The book repeatedly asks: "Will intelligent machines become our slaves, our partners, or our masters?" Perhaps the question can also be inverted: will humans become the slaves, partners, or masters of machines?

I had thought about this question before reading the book. My doubt is this: until a machine truly has self-awareness, does it have any notion of "I"? Without an "I", there can be no relationship defined in terms of one (slave, partner, master, and so on). In interacting with machines, humans have the illusion that they are "human-like", but this is like mistaking a lifelike wax figure for a real person; it is quite different from an intelligent robot with genuine self-awareness.

Can artificial intelligence ultimately give machines self-awareness, rather than merely responding on the basis of human-supplied data and algorithms? Or, once its initial setup is complete, can a machine learn and improve without relying on humans, acquiring the ability to upgrade itself?

At least until that day arrives, the relationship between machines and humans has not risen to a philosophical or ethical level, and intelligent robots are still regarded as human tools (at this stage, what is nominally AI is essentially IA). Yet even so, machines already make a difference between people. In the financial industry, for example, those who command powerful algorithm-driven automated trading systems amass more wealth than ordinary people, and have even driven some into bankruptcy. These are present realities, not predictions or science fiction. By extension, as artificial intelligence, genetic engineering, and robotics develop, the fates of many people will inevitably change, predictably or not.

Another, more extreme example: the artificial intelligence aboard the new American LRASM long-range anti-ship missile lets it plot its own attack route and coordinate that route with other missiles. By design, the LRASM can continue flying toward an enemy fleet even after losing contact with its human controllers, using AI to decide which targets to strike. This makes the LRASM a truly autonomous weapon, a killing machine.

Therefore, my answer is: some people will become slaves of machines, some will become partners of machines, some will become masters of machines, and some may become hybrids of human and machine (human brains in machine bodies)!

John Markoff, the book's author (the Chinese edition is titled Dancing with Robots), writes: "Will machines replace human workers, or augment their abilities? To some extent, both outcomes will occur." And: "In most cases, people will decide which technologies to deploy on the basis of profit and efficiency, but it is equally obvious that a new moral calculus is needed." Obviously, though, when interests and morality conflict, things become delicate. In reality, where enormous profits are at stake, people tend to play down morality, or even turn against it.

Markoff devotes many chapters to reviewing the development of intelligent machines: driverless cars, machine learning, algorithms, rescue robots, Apple's Siri. From these accounts he believes he has found the answer: "In the minds of human engineers and scientists, a choice has been made: to put people first."

I think this answer is self-deception, because the "people" in "people first" inevitably carry the dark side of human nature. Set aside the technological singularity (the point at which machine intelligence surpasses human intelligence) and whatever choices machines might make then; well before that, the greed and desire in human nature are enough to blacken the phrase "people first". Consider drugs: from gathering to cultivation, from chemical synthesis to electronic drugs (the video games to which countless people are addicted).

Intelligent machines will not be confined within a controlled perimeter the way nuclear weapons and nuclear power plants are. They are already everywhere, from wearables such as watches and glasses to artificial organs implanted in the body, and in the future our living and working spaces will be filled with intelligent machines of every type and size. It is not hard to extrapolate from our attitude toward mobile phones: as smartphones penetrated every corner of life, growing ever more capable and powerful, people's lives became inseparable from them, and human faculties themselves began to atrophy. For example, mounting evidence shows that relying on GPS to find our way and correct navigation errors degrades our memory and spatial reasoning (both very useful survival skills).

Norbert Wiener, the famous American mathematician and founder of cybernetics, once predicted:

In truth, people suffer from inner restlessness, greed, and ignorance, and cannot stop themselves; the human will is so weak that self-control is hard. If human beings not only fail to seek evolution in the spiritual realm but sink into sensory pleasure, unwilling to think independently and content with whatever answers intelligent machines supply, then no matter how advanced those machines become, this inertia is itself a disaster. I am afraid the beautiful picture of being watched over by machines of loving grace is nothing but the poet's wishful thinking.