Test 133. Reading. ЕГЭ (Unified State Exam) in English
1) Match the headings 1–8 with the texts A–G. Use each number only once. One heading is extra.
1. The power of music
2. Sound producers
3. Special knowledge needed
4. Musical sound characteristics
5. Differences in perception
6. The history of music
7. The choice of music matters
8. Different types of music
A.
Music is a group of sounds that people have arranged in a meaningful way. Some musicians make up music as they perform. Others sing songs or play pieces that someone else created. Musicians have developed a system for writing down music so that others can play it again. They use certain symbols, called notes, to indicate the tones to be played or sung. The arrangement of the notes shows the order in which the tones should be played. Other numbers and symbols show how fast to play each note. They are known as musical notation.
B.
Some music goes along with religious ceremonies. Other music is a part of everyday life. Traditional music made by everyday people is called folk music. Classical music is formal and artistic music that developed in Europe over hundreds of years. Orchestras, choirs, and chamber ensembles (small groups of musicians) often perform classical music. Opera is a type of classical music that features dramatic singing. When large numbers of people enjoy a type of music, it is called popular music.
C.
People use their voices to sing. To make other kinds of music, they use musical instruments. Stringed instruments, like violins and harps, have tight strings that make sounds when people pluck or rub them. Wind instruments, like trumpets and saxophones, make sounds when people blow into them. Percussion instruments, like drums and rattles, make sounds when people hit or shake them. Keyboard instruments, like pianos and accordions, make sounds when people press their keys, buttons, or levers.
D.
Rhythm describes the length of musical sounds. The most important part of rhythm is the pulse, or beat. Melody is a series of different tones, or sounds, in a piece of music. Harmony takes place when people play or sing more than one tone at the same time. Groups of tones played together are called chords. Harmony also describes the way chords go along with a melody. Form is the way that people put rhythm, melody, and harmony together. There are many different types of musical forms. Repeating a short melody is one of the simplest forms.
E.
If you want to firm up your body, head to the gym. If you want to exercise your brain, listen to music. Many of us instinctively know the effects of music on our mood and energy. There are few things that stimulate the brain the way music does. If you want to keep your brain engaged throughout the aging process, listening to or playing music is a great tool. It provides a total brain workout. Research has shown that listening to music can reduce anxiety, blood pressure, and pain, as well as improve sleep quality, mood, mental alertness, and memory.
F.
Listening to classical music has a wide range of benefits for your brain and body. For example, it can help you with relaxation, concentration, memory, and cognition. Listening to relaxing music, such as smooth jazz, can induce an alpha-wave state in your brain. These waves occur when you’re awake but relaxed, making smooth jazz one of the best ways to wind down at the end of a long day. Rap music often tells stories of people overcoming obstacles or achieving success in the face of unlikely odds.
G.
If you ask some people about the benefits of listening to music while trying to concentrate, you could hear mixed reviews. Listening to music to help us concentrate works differently for everyone. Some people might think it’s a remarkable study habit, while others may find it useless because it only distracts them. But branching out and trying new ways of boosting your concentration might help you find a practice that works well. It isn’t easy to find strategies that work specifically for you.
2) Read the text and fill in gaps A–F with the parts of sentences marked by the numbers 1–7. One of the parts in the list 1–7 is extra.
The roof of the world
Did you know that Tibet is called the “roof of the world?” Tibet is a small country surrounded on all sides by gigantic snowy mountain peaks. For thousands of years, these towering mountains acted like a fence, ___ (A). That’s one reason why explorers and writers have called Tibet the roof of the world. It’s hard to get to. The other reason is Tibet’s high elevation. When people climb mountain passes over 17,000 feet above sea level, they gasp for air ___ (B)!
Years ago, the people of Tibet were nomads, ___ (C). The ground in Tibet is much too rocky and thin to grow crops, so Tibetans centred their daily life and survival on the large ox, the yak. The yaks provided the nomads with nearly everything they needed – ___ (D). Even yak dung was used for fires.
Tibetan nomads would lead their herds of yaks and sheep across pastures, valleys, and mountainsides in search of the best grazing lands. They did not live in permanent homes made of wood, brick, or stone. When nomads arrived at their destination, they were so skilled at setting up their large yak-hair tents that they had them up in minutes. They could even compete to see ___ (E), a fire going, and hot tea poured.
Times are changing in Tibet, and more and more people live and work in villages and cities. But there are still nomads ___ (F).
1. which means people without permanent homes
2. who would be the first one to have their tent up
3. as they may need 30 yaks to carry their supplies
4. milk, butter, meat, and wool for clothes and ropes
5. as they are more than three miles high above the sea
6. keeping people from entering or leaving the country
7. that survive in the mountains just as their ancestors did
3) Read the text and write in the answer field the number (1, 2, 3, or 4) corresponding to the answer you have chosen.
My experience with ChatGPT
A few weeks ago, I asked ChatGPT to write an article and I have to say, it exceeded my expectations. Not only did ChatGPT write a comprehensive article, but it also included helpful headlines for each section. Since then, I’ve been thinking a lot about Artificial Intelligence (AI) and what it could mean for the future world.
I’ve been keeping up with news about AI developments, but ChatGPT really stood out to me. While there are other models out there doing similar things, there must be a reason why ChatGPT made such big headlines. I think it’s because people like me started using AI models for the first time and got very tangible results. However, as useful and unique as it can be, I do have some concerns.
After receiving the article from ChatGPT, I requested another one using similar keywords. ChatGPT delivered, but the resulting article was 62% similar to the first one. I doubt this would happen if I asked two people to write an article using the same keywords. As humans, we all have unique minds and experiences that shape our thoughts and words. Each person’s creativity is unique because it requires unique brain networks to fire simultaneously.
It’s no surprise that ChatGPT lacks originality, since it’s trained on millions of pieces of information from various sources. AI relies on pre-existing information to produce content. In contrast, humans learn in various ways and may draw different conclusions from similar experiences.
Another challenge with AI is intolerance. AI algorithms are only as good as the data they’re trained on, which can lead to intolerant results. Microsoft experienced this firsthand in 2016 when their AI chatbot on social networks became racist and misogynistic within 24 hours.
Even OpenAI, the creator of ChatGPT, acknowledges the limitations of AI. They warn that it “may occasionally generate incorrect information,” “may occasionally produce harmful instructions or biased content,” and has “limited knowledge of the world and events after 2021.”
The last limitation is telling. Is it possible for an AI chatbot to “live in the present?” Today’s reality is so fragmented and dependent on individual perspectives that even humans struggle to identify the truth. How can scientists train an AI model to differentiate truth from falsehood? If this were possible (let alone easy), why haven’t humans mastered this ability yet?
I wonder to what extent technological innovation occurs for the sake of innovation itself, rather than to address and solve a specific problem. Are we using AI to solve significant global issues, or are we merely using it to fix inconveniences?
Recently, I read news about a robo-dog for people with visual impairment. Equipped with AI, it talks and aids them in navigating cities. Why were living service dogs not good enough?
Apparently, they are expensive to train and maintain, so the answer technology offered was – what else? – robots. That would be quite reasonable, but 90% of vision loss is preventable or treatable with spectacles or eye surgery. Millions of people have visual impairment because they don’t have access to such treatments, making it a significant and global problem. Shouldn’t we use our resources to prevent and treat vision loss in the first place?
We have built a world so dependent on technology and so obsessed with growth that we are now willing to put a price on the only thing that makes us unique in this world: our brain. While we have not fully comprehended its capabilities, we are trying to make a digital copy of it. Are we sure we know what this means?
What does the author think of the article written by ChatGPT?
1) Its quality was a pleasant surprise for him.
2) It was worse than he’d thought.
3) ChatGPT should not have used headlines.
4) It made him think about his own future.
4) Read the text and write in the answer field the number (1, 2, 3, or 4) corresponding to the answer you have chosen.
What does the expression “made such big headlines” in paragraph 2 (“…there must be a reason why ChatGPT made such big headlines”) mean?
1) Wrote good headlines for articles.
2) Was widely discussed in the media.
3) Was better than other models.
4) Was used by many people.
5) Read the text and write in the answer field the number (1, 2, 3, or 4) corresponding to the answer you have chosen.
Why are humans creative in unique ways?
1) Their lives and thoughts are different.
2) They do not use the information that already exists.
3) They are never given the same set of keywords.
4) Their brain cells work all at the same time.
6) Read the text and write in the answer field the number (1, 2, 3, or 4) corresponding to the answer you have chosen.
Why did the AI chatbot become racist so fast?
1) The information it used for training was racist.
2) Social network users taught it to be intolerant.
3) It had not been designed properly.
4) It knew little about the current values.
7) Read the text and write in the answer field the number (1, 2, 3, or 4) corresponding to the answer you have chosen.
What does “this ability” in paragraph 7 (“…why haven’t humans mastered this ability yet?”) refer to?
1) To tell which information is true and which is not.
2) To live in the present.
3) To train AI models.
4) To solve specific problems.
8) Read the text and write in the answer field the number (1, 2, 3, or 4) corresponding to the answer you have chosen.
Which statement is TRUE?
1) Robo-dogs are better than living service dogs.
2) Living dogs are cheaper than robo-dogs.
3) The resources used to develop robo-dogs should be used differently.
4) It is a global problem that many people have issues with eyesight.
9) Read the text and write in the answer field the number (1, 2, 3, or 4) corresponding to the answer you have chosen.
What is the main idea of the last paragraph?
1) Our society is ready to buy and sell anything.
2) We are too dependent on technology.
3) It is our brain that makes us unique.
4) We should not toy with what we do not fully understand.