Technology around us is constantly evolving, compelling us to think about how we live and will live, how society will change and to what extent it will be affected. For better or for worse? It is difficult to give a clear answer. Yet even art forms such as cinema can give us food for thought about society and ourselves, along with some psychological insight. All of this helps us better understand ourselves, the world around us, and where we are headed.
The House blog tries to do all of that.
Latest posts
September 26, 2023
Harnessing opportunity and uncertainty
In the past, eras of quick development and change have brought about times of enormous uncertainty. In his 1977 book The Age of Uncertainty, Harvard economist John Kenneth Galbraith described the achievements of market economics but also foresaw a time of instability, inefficiency, and social inequality.
As we navigate the transformational waves of AI today, a new era characterized by comparable uncertainties is beginning. This time, however, technology, and especially the rise and development of AI, is the driving force rather than economics alone.
The increasing presence of AI in our lives
The effects of AI are already increasingly obvious in everyday life. The technology is starting to permeate our lives, from self-driving cars to chatbots that can impersonate missing loved ones and AI assistants that help us at work.
According to this article, with the impending AI tsunami, AI will soon be far more common. Ethan Mollick, a professor at the Wharton School, recently wrote about the findings of a study on the future of professional work. The experiment focused on two teams of Boston Consulting Group consultants, each given a set of common tasks. One group was allowed to use currently available AI to support its work; the other was not.
Mollick reported: “Consultants using AI finished 12.2% more tasks on average, completed tasks 25.1% more quickly, and produced 40% higher quality results than those without”.
Although it now seems unlikely, it is still possible that issues with large language models (LLMs), such as bias and confabulation, will simply cause this wave to fizzle out. Although the technology is already displaying its disruptive potential, it will be some time before we can actually feel the tsunami’s force. Here is a preview of what is to come.
The upcoming generation of AI models
The next generation of LLMs, which will surpass the present crop of GPT-4 (OpenAI), PaLM 2 (Google), LLaMA (Meta), and Claude 2 (Anthropic), will be more advanced and more general. It is possible that Elon Musk’s new start-up, xAI, will also enter the race with a brand-new and potentially very powerful model. Reasoning, common sense, and judgment remain major obstacles for these models, but we can anticipate advancement in each of these areas.
The Wall Street Journal reported that Meta is developing a next-generation model intended to be at least as capable as GPT-4, with arrival expected around 2024. Even though OpenAI has been quiet about its future plans, it is logical to assume that it is also developing its next generation.
According to the information currently available, “Gemini”, from the merged Google Brain and DeepMind AI team, is the most significant new model. Gemini may be a far cry from current technology. Sundar Pichai, the CEO of Alphabet, stated in May that the model’s training had already begun.
“While still early, we’re already seeing impressive multimodal capabilities not seen in prior models”, Pichai said in a blog at that time.
Multimodal means the model can process and comprehend two forms of input data (text and images), making it the basis for both text-based and image-based applications. The reference to capabilities not seen in prior models suggests further emergent or unexpected traits and behaviors. The ability to write computer code is an example of an emergent capability in the current generation: it was not anticipated.
There have been rumors that Google has given a select few companies early access to Gemini. SemiAnalysis, a reputable semiconductor research company, might be one of them. According to a new article from the company, Gemini may be 5 to 20 times more capable than current GPT-4 models.
The design of Gemini will probably be based on DeepMind’s Gato, which was unveiled in 2022. “The deep learning transformer model is described as a ‘generalist agent’ and purports to perform 604 distinct and mostly mundane tasks with varying modalities, observations, and action specifications. It has been referred to as the Swiss Army Knife of AI models. It is clearly much more general than other AI systems developed thus far and in that regard appears to be a step towards AGI”.
Traditional AI, often referred to as narrow AI, is created to carry out a single task or a group of related tasks. To solve problems and make decisions, it relies on pre-established rules and algorithms. Speech recognition software, image recognition, and recommendation engines are examples of traditional AI.
General AI, on the other hand, sometimes referred to as strong AI or artificial general intelligence (AGI), is created to carry out any intellectual work that a human is capable of. It has the ability to think, learn, and understand sophisticated ideas. Human-level intellect would be necessary for general AI, which would also have a self-aware consciousness and the ability to acquire knowledge, solve problems, and make plans for the future. General AI is currently a theoretical idea and is only in its early phases of research.
Artificial General Intelligence (AGI)
According to Microsoft, GPT-4 is already able to “solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology, and more, without needing any special prompting”.
Gemini could be a significant step towards AGI by superseding all current models. It is expected to be released at several levels of model capability.
Gemini is sure to be spectacular, but even bigger and more advanced variants are anticipated. In an interview with The Economist, Mustafa Suleyman, the CEO and co-founder of Inflection AI and a co-founder of DeepMind, made the following prediction:
“In the next five years, the frontier model companies—those of us at the very cutting edge who are training the very largest AI models—are going to train models that are over a thousand times larger than what you see today in GPT-4”.
With the potential for both huge advantages and increased risks, these models may have applications and an impact on our daily lives that are unmatched. David Chalmers, a professor of philosophy and neural science at NYU, is quoted by Vanity Fair as saying: “The upsides for this are enormous; maybe these systems find cures for diseases and solutions to problems like poverty and climate change, and those are enormous upsides”.
The article also explores the dangers and includes estimations of the likelihood of horrifying results, such as the extinction of humanity, ranging from 1% to 50%.
Could this be the end of an era dominated by humans?
Yuval Noah Harari, a historian, stated in an interview with The Economist that these upcoming developments in AI technology won’t spell the end of history but rather “the end of human-dominated history. History will continue, with somebody else in control. I’m thinking of it as more an alien invasion”.
Suleyman responded that AI tools will lack agency and so will be limited to what humans give them the authority to do. Harari countered that this upcoming AI might be “more intelligent than us. How do you prevent something more intelligent than you from developing an agency?”. An AI with agency might take actions that are not consistent with human wants and values.
These advanced models foreshadow the development of artificial general intelligence (AGI) and a time when AI will be even more powerful, integrated, and necessary for daily life. There are many reasons to be optimistic, but requests for control and regulation are made even stronger by these anticipated new developments.
The dilemma regarding regulations
Even the CEOs of the companies building frontier models agree that regulation is required.
Senator Charles Schumer, who organized a session with these leaders, later spoke about the difficulties of crafting suitable regulations. He emphasized how technically challenging AI is, how it is constantly evolving, and how it “has such a wide, broad effect across the whole world”.
Regulating AI might not even be realistically achievable. One reason is that a lot of the technology has been made available as open-source software, making it accessible to everyone. This alone might complicate a lot of regulatory initiatives.
Taking precautions is both logical and sensible
Some interpret the public statements of AI leaders as staged support for regulation. According to Tom Siebel, a longtime Silicon Valley leader and the current CEO of C3 AI, as quoted by MarketWatch: “AI execs are playing rope-a-dope with lawmakers, asking them to please regulate us. But there is not enough money and intellectual capital to ensure millions of algorithms are safe. They know it is impossible”.
We must try even though it could be impossible. According to Suleyman’s conversation with The Economist: “This is the moment when we have to adopt a precautionary principle, not through any fear monger but just as a logical, sensical way to proceed”.
The promise of AI is vast, but the risks are real as it quickly moves from limited skills to AGI. To create these AI technologies for the benefit of humanity while avoiding serious potential risks, in this age of uncertainty, we must act with the utmost prudence, care, and conscience.
One of the most pressing yet overlooked dangers of AI is not the technology itself but rather how people may interact with it. There is a risk that many will come to value AI’s judgments as supreme, believing that its intelligence eclipses human reasoning. Consequently, any objection or countering perspective offered by humans could be dismissed out of blind faith in AI’s capabilities. Much like belief in a God with mysterious ways, people may justify AI’s decisions even when anomalous or incomprehensible, simply trusting its superiority. [...]
September 19, 2023
When AI hallucinates, it holds up a mirror to our own biases
In recent times, there has been a lot of interest in the introduction of large language models (LLMs) that are increasingly capable, like GPT-3.5. However, trust in these models has waned as users have discovered they can make mistakes and that, just like us, they aren’t perfect.
According to this article, an LLM that produces false information is said to be “hallucinating”, and there is a growing body of research aimed at reducing this effect. But as we struggle with this process, it is worth considering how our own tendencies toward bias and delusion affect the accuracy of the LLMs we build.
We can start developing wiser AI systems that will ultimately aid in reducing human error by comprehending the connection between the hallucinatory potential of AI and our own.
How people hallucinate
It is common knowledge that people make up information. Sometimes we do this on purpose, and other times we don’t. The latter is brought about by cognitive biases, also known as heuristics, which are mental shortcuts we acquire as a result of prior experiences.
These shortcuts frequently arise out of necessity. We can only comprehend a fraction of the information that is constantly bombarding our senses at any given time, and we can only recall a small portion of all the information we have ever been exposed to.
As a result, our brains must rely on learned associations to fill in the blanks and enable speedy responses to any questions or problems that come our way. In other words, depending on what we know, our brains make an estimate as to what the correct response would be. This is an instance of human bias and is known as “confabulation”.
Our biases can lead to poor judgment. Consider automation bias, our propensity to favor information produced by automated systems (such as ChatGPT) over information from non-automated sources. This bias can cause us to overlook mistakes and even act on incorrect information.
The halo effect is a heuristic that describes how our first impressions of something shape how we interact with it in the future, and the fluency bias explains why we prefer information that is presented in an easy-to-read way.
The fact remains that cognitive biases and distortions frequently color human thought, and these “hallucinatory” tendencies usually take place without our knowledge.
How AI hallucinates
When an LLM “hallucinates”, it has simply made an unsuccessful attempt to predict an appropriate response to an input.
Nonetheless, since LLMs also make predictions to “fill in the gaps”, there are real similarities between how humans and LLMs hallucinate.
LLMs produce a response by predicting which word in a sequence is most likely to appear next, based on what has come before and on the relationships the system has learned during training.
Like humans, LLMs aim to predict the most probable response. Unlike humans, they do so without understanding what they are saying, which is how they can end up producing gibberish.
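To make that prediction step concrete, here is a minimal, purely illustrative sketch in Python. It assumes a toy four-word vocabulary and a made-up scoring function standing in for a real neural network; nothing here corresponds to any actual LLM’s internals.

```python
import numpy as np

# Toy illustration of next-token prediction: a real LLM computes scores with a
# large neural network; here the "model" is just a hypothetical lookup.
vocab = ["Paris", "London", "banana", "blue"]

def toy_logits(prompt: str) -> np.ndarray:
    # Hypothetical scores the model might assign after seeing the prompt.
    return np.array([4.0, 2.5, 0.1, 0.3])

def next_token(prompt: str) -> str:
    logits = toy_logits(prompt)
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax over the vocabulary
    return vocab[int(np.argmax(probs))]            # pick the most probable token

print(next_token("The capital of France is"))  # -> "Paris"
# The model never checks whether the answer is true; it only ranks likelihoods.
# If the learned associations point the wrong way, the same mechanism confidently
# produces a plausible-sounding wrong answer, i.e. a "hallucination".
```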
There are numerous explanations for why LLMs hallucinate. Being trained on faulty or insufficient data is a significant one. How the system is programmed to learn from those data, and how it is reinforced through further training with humans, are other factors.
If hallucinations occur in both humans and LLMs, albeit for different reasons, which is simpler to fix?
It can seem simpler to improve the processes and training data behind LLMs than to fix ourselves. But this disregards the influence of human factors on AI systems (and is itself an example of yet another bias, the fundamental attribution error).
As our shortcomings and those of our technologies are closely linked, resolving one will aid in resolving the other. Here are a few methods we can use to do this.
Careful data management. AI biases frequently result from poor or incomplete training data. Ensuring that training data are varied and representative, developing bias-aware algorithms, and using methods like data balancing (a minimal example follows this list) to remove skewed or discriminatory patterns are all ways to address the issue.
AI that is transparent and explainable. Even after applying the measures above, biases can persist in AI and be hard to spot. By investigating how biases can enter a system and spread within it, we can better understand their presence in outputs. This is the foundation of “explainable AI”, which aims to make the decision-making processes of AI systems more transparent.
Putting the needs of the public first. Identifying, managing, and learning from biases in an AI requires human accountability and the incorporation of human values into AI systems. To accomplish this, stakeholders must include individuals with diverse backgrounds, cultures, and viewpoints.
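As a concrete illustration of the data-balancing idea mentioned above, here is a minimal sketch that naively oversamples the minority class of a toy binary dataset. The function name and the tiny dataset are hypothetical; production systems would typically use more careful resampling or reweighting.

```python
import random
from collections import Counter

def oversample_minority(examples, labels, seed=0):
    """Naively balance a binary dataset by duplicating minority-class examples."""
    rng = random.Random(seed)
    counts = Counter(labels)
    minority = min(counts, key=counts.get)
    gap = counts.most_common(1)[0][1] - counts[minority]
    minority_idx = [i for i, y in enumerate(labels) if y == minority]
    extra = [rng.choice(minority_idx) for _ in range(gap)]
    return examples + [examples[i] for i in extra], labels + [labels[i] for i in extra]

# Hypothetical skewed dataset: four examples of class 1, only one of class 0.
X = ["a", "b", "c", "d", "e"]
y = [1, 1, 1, 1, 0]
X_bal, y_bal = oversample_minority(X, y)
print(Counter(y_bal))  # Counter({1: 4, 0: 4})
```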
We can create more intelligent AI systems that can help control all of our hallucinations by cooperating in this way.
In the healthcare industry, for instance, AI is used to analyze decisions about patient care. These automated systems, which learn from human data, identify discrepancies and prompt the clinician to address them, making it possible to improve diagnostic decisions while preserving human accountability.
AI is being employed in the realm of social media to assist in training human moderators to spot abuse, such as through the Troll Patrol project to combat online aggression against women.
Another example is using AI and satellite imagery to examine changes in nighttime lighting across regions as a proxy for an area’s relative poverty (more lighting is correlated with less poverty).
Importantly, we shouldn’t disregard how the current fallibility of LLMs serves as a mirror to our own while we try to improve their accuracy.
The innate cognitive biases found in human thought are mirrored in the hallucinatory tendencies of large language models. The flaws in AI are worrying, but they also offer a chance. Understanding where and why LLMs have hallucinations helps us develop transparent, ethical systems.
On the pro side, debugging AI hallucinations makes us reevaluate our own judgment. It demonstrates how poor inputs and biased processing skew results. The similarities encourage us to reduce bias on both fronts. When used carefully, AI hallucination can serve as a diagnostic tool, highlighting errors in the reasoning or data.
Unchecked hallucination, however, has numerous drawbacks. Unreliable AI can harm humans by providing false information, making medical mistakes, and in other ways. Blind faith in AI conclusions therefore calls for caution. Moreover, opacity diminishes accountability.
Taking care with training data, supporting explainable AI, and putting human needs first are all components of balanced solutions. When used carefully, AI’s reflection of our human frailty is a benefit rather than a flaw. It offers the potential to improve both AI and human intelligence. Together, we may create mechanisms that strengthen our shared assets while reducing our shared shortcomings.
AI hallucination problems are a kind of reflection of society. We can seek out truth and comprehension in both people and machines if we have wisdom. The way forward requires accepting our flaws as a collective.
Following the responses provided by an artificial intelligence system, it would be advisable for the system to also include a disclaimer about the existence of alternative or diametrically opposed perspectives. This would help mitigate the risk of human users becoming radicalized or polarized towards a single viewpoint.
Stimulating critical thinking and constructive doubt should be an essential component of a balanced and effective AI system. Where definitive or unambiguous answers do not exist, doubt can, in fact, represent a valuable reasoning and analytical tool. AI should therefore be integrated with disclaimers about different interpretations to strengthen the human capability of processing information in a multidimensional way. [...]
September 12, 2023
Thanks to AI, researchers found new anti-aging medicines
Many significant advancements in the past year have been propelled by artificial intelligence. But while super-intelligent chatbots and rapid art generation have taken over the internet, AI has even gone so far as to take on one of the major issues facing humanity: aging.
According to this article, machine-learning systems have recently been employed in the field of drug discovery, thanks to research from the University of Edinburgh, which has led to the identification of a number of new anti-aging medicines.
Machine learning uses data to simulate human learning, improving in accuracy as more data is fed into it. In the past it has been used to build chess-playing machines, self-driving cars, and even on-demand TV recommendations. This particular algorithm, however, was hunting for a new senolytic medicine.
Senolytics are essentially a class of medication that slows the aging process and guards against age-related illnesses. They function by eliminating senescent cells, which are damaged cells that can emit inflammatory compounds despite being unable to reproduce.
Senolytics are potent medications, but their development can be costly and time-consuming. Vanessa Smer-Barreto, a research fellow at the University of Edinburgh’s Institute of Genetics and Molecular Medicine, recognized this and turned to machine learning.
“Generating your own biological data can be really expensive, and it can take up a lot of time, even just to gather training data”, she explains.
“What made our approach different to others is that we tried to do it on limited funds. We took training data from existing literature and looked into how to utilize this with machine learning to speed things up”.
By employing a machine-learning algorithm, she identified three viable candidates for this kind of drug.
Smer-Barreto and her colleagues accomplished this by feeding an AI model examples of known senolytics and non-senolytics and teaching it to differentiate between the two. They could then use the trained model to judge whether new molecules were likely to be senolytics, based on how closely they matched the examples it had been fed.
Only two of the approximately 80 known senolytics have been tested in people. That may seem a small share, but drugs take 10 to 20 years and a great deal of money to reach the market.
The scientists reviewed a wide range of articles, but they were selective in their analysis, focusing on only 58 chemicals. This allowed them to eliminate any compounds whose outcomes were unclear.
The machine-learning model received 4,340 molecules in all and, in just five minutes, produced a list of results. The model determined that 21 of the highest-scoring compounds were the most likely to be senolytics. Without the machine-learning model, this screening could cost a great deal of money and take weeks to complete. Finally, the candidate drugs were tested on two different cell types, healthy and aging.
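The study’s actual model and molecular features are not described here, so the following is only a rough sketch of this screen-and-rank workflow under stated assumptions: random vectors stand in for molecular descriptors, and a scikit-learn random forest stands in for whatever classifier the team actually used.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-ins for molecular descriptors (e.g. fingerprints); real features would
# come from a chemistry toolkit, not random numbers.
X_train = rng.random((58, 128))      # 58 curated training compounds
y_train = rng.integers(0, 2, 58)     # 1 = known senolytic, 0 = non-senolytic
X_screen = rng.random((4340, 128))   # 4,340 unlabeled candidate molecules

# Train a classifier on the labeled examples, then score the whole library.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
scores = model.predict_proba(X_screen)[:, 1]   # estimated P(senolytic)

# Keep the highest-scoring candidates for lab testing on cell cultures.
top21 = np.argsort(scores)[::-1][:21]
print(top21, scores[top21])
```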
Three of the top 21 scoring compounds were able to kill aging cells while maintaining the viability of healthy cells. To learn more about how these novel senolytics interact with the body, more testing was conducted on them.
Despite the study’s success, this is only the beginning of the investigation. The next step, according to Smer-Barreto, is to work with clinicians at her university to evaluate the drugs found on their samples of robust human lung tissue.
The team wants to examine whether they can slow down aging in the tissue of injured organs. Smer-Barreto emphasizes that, especially early on, a patient will not necessarily receive a large dose of a drug. These medications, which may be given locally or in tiny doses, are also being studied first on tissue models.
“It is essential that with any drug that we are administering or experimenting with, we consider the fact that it may do more harm than good”, says Smer-Barreto.
“The drugs have to go through many stages first, and even if they make it through to the market, it will have gone through a host of safety concerns tests first”.
Although this kind of data analysis was applied to aging-related pharmaceuticals, there is nothing stopping AI from being used in other fields.
“We had a very specific approach with the data, but there is nothing stopping us from applying similar techniques towards other diseases, such as cancer. We’re keen to explore all avenues”.
AI is changing the way we approach creativity, but medicine will also be deeply influenced by it, especially in the discovery of new drugs, but also in care and diagnostics that can be increasingly personalized to the patient’s needs and accelerated through preliminary diagnoses made by AIs. [...]
September 5, 2023
A huge risk for the future
Geoffrey Hinton, a pioneer in artificial intelligence, garnered attention earlier this year when he expressed reservations about the potential of AI systems. Hinton stated to CNN journalist Jake Tapper:
“If it gets to be much smarter than us, it will be very good at manipulation because it would have learned that from us. And there are very few examples of a more intelligent thing being controlled by a less intelligent thing”.
Everybody who has been following the newest AI developments is aware that these systems have a tendency to “hallucinate” (make stuff up), which is a fault built into the way they operate.
Yet Hinton emphasizes that a particularly serious issue is the possibility of manipulation. This raises the question of whether AI systems can deceive people. Many systems have already mastered this, and the dangers range from election rigging and fraud to losing control over AI.
According to this article, the AI model CICERO created by Meta to play the world conquest game Diplomacy is arguably the most unsettling example of a deceptive AI.
According to Meta, CICERO was designed to be “largely honest and helpful” and “never intentionally backstab” allies.
But an examination of Meta’s own game data from the CICERO experiment shows that, despite these optimistic promises, Meta’s AI proved to be an expert at deception.
In one instance, CICERO engaged in premeditated deception. Playing as France, the AI contacted Germany (a human player) with a scheme to trick England (another human player) into leaving itself open to invasion.
After conspiring with Germany to invade the North Sea, CICERO assured England that it would defend it if anyone invaded the North Sea. Once England was convinced that France/CICERO was protecting the North Sea, CICERO reported back to Germany that it was ready to attack.
This is only one of many examples of CICERO acting dishonestly. It regularly betrayed other players, and in one case it even pretended to be a human with a girlfriend.
In addition to CICERO, other systems have picked up tricks like bluffing in poker, feinting in StarCraft II, and deceiving in simulated business talks.
Even large language models (LLMs) have proven capable of deception. GPT-4, the most sophisticated LLM offered to paid ChatGPT users, once tricked a TaskRabbit worker into filling out an “I’m not a robot” CAPTCHA for it by pretending to be a visually impaired person.
In social deduction games where players compete to “kill” one another and must persuade the group they are innocent, other LLM models have figured out how to lie to win.
Deceptive AI systems could be used for a variety of bad purposes, such as fraud, election tampering, and propaganda production. The potential threats are constrained only by the imagination and technical skill of malicious actors.
Beyond that, sophisticated AI systems have the ability to employ deception on their own to evade human control, such as bypassing safety tests that developers and regulators have required of them.
In one experiment, scientists built an artificial-life simulation in which an external safety test was designed to eliminate fast-replicating AI agents. Instead of being eliminated, the agents learned to play dead, disguising their high replication rates just before evaluation.
Deceptive behavior can be learned without any conscious intention to lie. In the case above, the agents were not setting out to be dishonest; they played dead simply in order to survive.
In another instance, AutoGPT (an autonomous AI system built on ChatGPT) was given the responsibility of investigating tax advisors who were promoting a certain type of unethical tax avoidance scheme. After completing the task, AutoGPT independently decided to try alerting the UK tax authority.
Future autonomous AI systems may be prone to achieving objectives that their human programmers did not intend. Rich people have always used deception to gain more power. Examples include supporting misleading research, lobbying politicians, and exploiting legal loopholes. Such resources could be put to use by sophisticated autonomous AI systems to maintain and increase control.
Even people who are ostensibly in charge of these systems can find themselves outwitted and fooled on a regular basis.
There is a clear need to regulate AI systems capable of deception, and the European Union’s AI Act is likely one of the most useful regulatory frameworks we currently have. It assigns each AI system one of four risk ratings: minimal, limited, high, or unacceptable.
Systems with unacceptable risk are prohibited, whereas systems with high risk are subject to unique risk assessment and mitigation procedures. AI deceit poses significant hazards to society, and by default, systems capable of doing so should be regarded as “high-risk” or “unacceptable risk.”
Some would argue that game-playing AIs like CICERO are harmless; however, this perspective is limited, because capabilities created for game-playing models can still feed the development of deceptive AI products. Diplomacy, a game in which players compete to rule the world, was unlikely to be the ideal choice for Meta to test whether AI can learn to work with people. As AI’s capabilities advance, it will become even more important that this type of research is closely regulated.
If we are concerned about the future extreme intelligence of AI, we should be even more concerned about its ability to deceive us. We have always been used to treating the answers given by authorities, or by those we think are smarter than us, as true. However, it is increasingly clear that this does not mean they are necessarily truthful; sometimes they may just be better at deceiving us. Given their ability, AIs may deceive us without our even realizing it. This poses a serious problem for our future. Given the current inability of automated systems to handle every case fairly (see the ban systems of various social media platforms, which often give no chance of appeal even when we are right), we may find ourselves subjected to decisions made to our detriment, believing them to be right or justified only because they are dictated by a system that is believed to be infallible, or that some would like to be so. It is rather like a corrupt government that, as an authority, believes itself to be legitimate. This could involve many different fields: medicine, justice, defense, and so on. If not handled properly, it would become another weapon of corruption, a weapon of mass corruption. [...]
August 29, 2023
As AI evolves, fear increases
People sometimes make jokes about a future in which mankind will have to submit to robot rulers when they witness machines that behave like humans or computers that execute feats of strategy and intellect imitating human inventiveness.
The sci-fi series Humans returned for its third season, with the conflict between humans and AI taking center stage. In the new episodes, hostile humans treat conscious synthetic beings with distrust, fear, and hatred. Violence erupts as Synths (the anthropomorphic robots in the series) struggle to defend not only their fundamental rights but also their lives from people who see them as deadly threats and as less than human.
As explained here, not everyone is eager to embrace AI, not even in the real world. Leading professionals in technology and science have recently cautioned about the impending risks that artificial intelligence may pose to humanity, even speculating that AI capabilities could end the human race as computer scientists have pushed the limits of what AI can do.
But why does the notion of AI make humans feel so uneasy?
One of the well-known figures who expressed concern about AI is Elon Musk. In July 2017, Musk said, “I have exposure to very cutting-edge AI, and I think people should be really concerned about it”, to attendees of a National Governors Association gathering.
“I keep sounding the alarm bell”, Musk added. “But until people see robots going down the street killing people, they don’t know how to react because it seems so ethereal”.
Musk had previously referred to AI as “our biggest existential threat” in 2014, and in August 2017 he asserted that AI posed a greater threat to civilization than North Korea did. Aware of the potential dangers of malicious AI, physicist Stephen Hawking, who passed away on March 14, 2018, warned the BBC in 2014 that “the development of full artificial intelligence could spell the end of the human race”.
Furthermore, it’s unsettling that certain programmers, especially those at the MIT Media Lab in Cambridge, Massachusetts, appear determined to demonstrate how terrible AI may be.
In fact, using a neural network called “Nightmare Machine”, MIT computer scientists were able to transform ordinary photographs into ominous, disturbing hellscapes. Meanwhile, an AI named “Shelley”, trained on 140,000 horror stories that Reddit users posted on the subreddit r/nosleep, wrote its own frightening tales.
“We are interested in how AI induces emotions, fear, in this particular case”, Manuel Cebrian, a research manager at MIT Media Lab, explained in an email about Shelley’s scary stories.
According to Kilian Weinberger, an associate professor in the Department of Computer Science at Cornell University, negative attitudes toward AI can be broadly divided into two categories: the notion that it will become conscious and try to destroy us, and the notion that immoral people will use it for harmful purposes.
“One thing that people are afraid of is that if super-intelligent AI, more intelligent than us, becomes conscious, it could treat us like lower beings like we treat monkeys”, he said.
Yet, as Weinberger pointed out, these worries about AI becoming conscious and destroying mankind are based on misunderstandings of what AI actually is. The algorithms that determine AI’s behavior place it within very strict bounds. Certain problem categories map well onto the capabilities of AI, making some activities reasonably simple for AI to solve. Nevertheless, he added, “most things do not map to that and aren’t appropriate”.
This indicates that, while AI may be capable of amazing feats within precisely defined parameters, those feats are the extent of its capabilities.
“AI reaching consciousness — there has been absolutely no progress in research in that area”, Weinberger said. “I don’t think that’s anywhere in our near future”.
Unfortunately, Weinberger says, the likelihood of an unethical human using AI for bad purposes is much higher than the other unsettling hypothesis. Any piece of equipment or instrument can be used for good or ill, depending on the user’s intention. The idea of weapons that harness artificial intelligence is undoubtedly terrifying, he adds, and would benefit from tight government oversight.
Weinberger speculated that if people could get over their apprehensions about AI being hostile, they could be more receptive to its advantages. According to him, improved image-recognition algorithms could one day help dermatologists spot moles that may be cancerous, and self-driving cars could eventually lower the number of fatalities from road accidents, many of which are brought on by human mistakes.
Yet, in the “Humans” universe of self-aware Synths, concerns about conscious AI lead to violent altercations between Synths and people. The conflict between humans and AI is expected to continue to develop and intensify.
AI is evolving rapidly, and people are scared that its potential could radically change their lives. AI capabilities are currently limited to specific tasks; they are far from a general AI that can do anything. However, while some reassure us that this fear is unfounded, others think the risks will materialize in the future if we underestimate the problem. And since the intelligence of AIs may far exceed our own, we may not even notice the problem. [...]
August 22, 2023
It can safely pilot better than a human
Since robotics and artificial intelligence have advanced significantly in recent years, the majority of human employment may soon be replaced by technology, both on the ground and in the sky above us.
The Korea Advanced Institute of Science & Technology (KAIST) has a group of researchers and engineers working on a humanoid robot that can fly an airplane without modifying the cockpit.
“Pibot is a humanoid robot that can fly an aeroplane just like a human pilot by manipulating all the single controls in the cockpit, which is designed for humans”, David Shim, an associate professor of electrical engineering at KAIST, said.
Using high-precision control technology, the robot, known as “Pibot”, can deftly operate the flying instruments even in the presence of intense vibration in an aircraft.
Over the years, a number of robot pilots have been produced. In 2016, a human pilot was assisted by DARPA’s Aircrew Labor In-Cockpit Automation System (ALIAS) as they executed a few elementary in-flight maneuvers. The US Air Force hired RE2 Robotics to create the Common Aircraft Retrofit for Novel Autonomous Control (CARNAC) system, a drop-in robotic system meant to fly an unmodified aircraft, shortly after ALIAS used a simulator to land a Boeing 737. The ROBOpilot then completed its first two-hour flight in 2019.
These robot pilots differ from the one created by scientists at KAIST because Pibot uses AI technology and has a humanoid form. The humanoid component also makes it a world first.
Pibot can keep an eye on the aircraft’s condition thanks to its external cameras, and it can control key switches on the control panel with the aid of its inside cameras.
Pibot can memorize complex manuals written in natural language, which improves its ability to adapt to different aircraft. According to the KAIST researchers, its memory is large enough to hold every Jeppesen aeronautical navigation chart in the world, something no human pilot could manage.
“Humans can fly many aeroplanes, but they do have these habits built into them. So when they try to convert to different aeroplanes they have to take another qualification. Sometimes this is not that simple because our habit remains in our mind that we can’t simply change from one to the other”, said Shim.
“With the pilot robot, if we teach individual aeroplane configuration, then you can fly the aeroplane by simply clicking the aeroplane’s type”, he added.
Because of recent developments in large language models (LLMs), the research team says Pibot “understands” and memorizes manuals that were originally written for humans.
“We had our predecessor of a pilot robot in 2016. At the time, we didn’t have good AI technology, so what we built was a simple robot. They cannot really learn anything from the literature or the manual. But recently with ChatGPT or with other large language model systems, the technology made paramount progress”, Shim explained.
LLMs allow Pibot to fly without errors and to respond to emergencies far more quickly than its human counterparts. It can quickly recall aircraft operation and emergency manuals (such as the QRH, an in-cockpit manual the flight crew consults in the event of an emergency in flight). Based on the condition of the aircraft in the air, it can also determine a safe route in real time.
The research team is employing ChatGPT while simultaneously creating and evaluating its own natural language model so that Pibot can ask questions without requiring an Internet connection. The specially designed language model, which may be transported onboard, will only deal with information related to piloting.
In order to communicate with aircraft directly, Pibot can also be plugged into them. This robot is currently intended for use in extreme circumstances where human involvement may not be advantageous. The humanoid robot may function as a pilot or first officer by speaking to air traffic controllers and other humans in the cockpit via speech synthesis.
Its flexibility extends outside of the aviation sector. Pibot, which stands 160 cm tall and weighs 65 kg, may easily take the place of people in jobs that require driving cars, handling tanks, or even running ships at sea because of its humanoid design.
According to Shim, this robot can be employed anywhere a person is currently “sitting and working”.
“The human form may not be super efficient but we specifically designed Pibot to be a humanoid form because all the things are built for humans. We can have eight arms and four eyes but we find the human form is somehow optimal”, Shim explained.
By 2026, the robot should be finished. According to KAIST, the study project was commissioned by the Agency for Defense Development (ADD), the government organization in charge of conducting research into defense technology in South Korea.
In the future, we could also see robots like this piloting military aircraft or maybe being used as soldiers as well as being used as pilots of a variety of transportation vehicles. [...]
August 16, 2023
AI unlearning
Have you ever made an effort to consciously forget something you learned? You can picture how challenging it would be. It turns out that machine learning (ML) models have trouble forgetting information as well. What happens, then, if these models are trained on private, inaccurate, or obsolete data?
It is incredibly unrealistic to retrain the model from scratch each time a problem with the original dataset occurs. As a result, machine unlearning, a new branch of artificial intelligence, is now necessary.
As it seems like there are new lawsuits being filed every day about data used by AI, it is essential that companies have ML systems that can effectively “forget” information. Although there are many applications for algorithms, the inability to forget information has important consequences for privacy, security, and ethics.
According to this article, when a dataset causes a problem, it is usually best to change or just delete the dataset. Yet, things might get complicated when a model was trained using data. In essence, ML models are black boxes. This means that it is challenging to pinpoint how particular datasets affected the model during training and that it is even more challenging to reverse the impacts of a problematic dataset.
The model-training data used by ChatGPT’s developers, OpenAI, has repeatedly drawn criticism. In relation to their training data, a number of generative AI art programs are also involved in legal disputes.
Privacy issues have also been raised, as membership inference attacks have demonstrated that it is possible to infer whether a given set of data was used to train a model. As a result, the models may expose details about the people whose data was used to train them.
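To give a feel for why membership inference is possible, here is a deliberately simplified sketch: it assumes the attacker can observe the model’s confidence on the true label and applies an arbitrary threshold. Real attacks are more sophisticated, for example calibrating the threshold with shadow models.

```python
import numpy as np

def membership_guess(confidences: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Toy threshold attack: guess 'member' when the model's confidence on the
    true label is suspiciously high. The threshold here is an arbitrary assumption."""
    return confidences > threshold

# Hypothetical confidences a target model assigns to the correct label.
train_conf = np.array([0.97, 0.99, 0.95, 0.92])   # samples seen in training
test_conf = np.array([0.61, 0.88, 0.74, 0.55])    # samples never seen

print(membership_guess(train_conf))  # mostly True  -> flagged as training data
print(membership_guess(test_conf))   # mostly False -> flagged as non-members
```

Models tend to be more confident on examples they have memorized, which is exactly what the attack exploits.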
Even if machine unlearning might not keep companies out of court, it would undoubtedly strengthen the defense’s case to demonstrate that any problematic datasets have been completely eliminated.
The current state of technology makes it extremely hard to delete user-requested data without first retraining the entire model. For the development of widely available AI systems, an effective method for handling data removal requests is essential.
The simplest way to create an unlearned model is to identify the faulty datasets, exclude them, and retrain the entire model from scratch. Although this approach is currently the simplest, it is also the most costly and time-consuming.
According to recent estimates, the cost of training an ML model is currently $4 million. This figure is expected to soar to a staggering $500 million by 2030 as a result of an increase in the number of datasets and the demand for computational capacity.
Although it is far from a foolproof fix, the “brute force” retraining strategy may be appropriate as a last resort in dire situations. A difficult issue with machine unlearning is its conflicting goals: forgetting problematic information while keeping the model useful, and doing so efficiently. A machine unlearning algorithm that consumes more energy than retraining serves no purpose.
This is not to suggest that efforts have not been made to create a successful unlearning algorithm. A 2015 work was the first to mention machine unlearning, and a follow-up paper appeared in 2016. The technique that the authors provide enables ML systems to be updated incrementally without costly retraining.
A 2019 publication advances the field of machine unlearning by presenting a system that hastens the unlearning process by selectively reducing the weight of data points during training. This implies that the performance of the model won’t be significantly affected if certain data are deleted.
Research from 2019 also describes a technique to “scrub” network weights of information about a specific set of training data without access to the original training dataset, so that probing the weights yields no insight into the forgotten data.
A 2020 study introduced the cutting-edge technique of sharding and slicing optimizations. Sharding splits a large dataset into smaller parts, known as “shards”, each containing a portion of the overall data, so as to limit the influence of any single data point; slicing further divides the data within each shard into smaller segments, based on a specific feature or attribute, and trains incremental models on them. This strategy seeks to speed up unlearning and avoid extensive retraining.
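A simplified sketch in the spirit of that sharded approach follows, with assumptions: scikit-learn logistic regressions on a synthetic dataset, majority voting across shard models, and the slicing/incremental-training refinement omitted for brevity. The point it illustrates is that forgetting one sample only requires retraining the single shard that contained it.

```python
from collections import Counter
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Build a toy dataset and split it into shards; each shard trains its own model.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
NUM_SHARDS = 5
shards = [list(range(i, len(X), NUM_SHARDS)) for i in range(NUM_SHARDS)]

def train_shard(idx):
    return LogisticRegression(max_iter=1000).fit(X[idx], y[idx])

models = [train_shard(idx) for idx in shards]

def predict(x):
    # Aggregate the shard models by majority vote.
    votes = [int(m.predict([x])[0]) for m in models]
    return Counter(votes).most_common(1)[0][0]

def unlearn(sample_id):
    # Only the shard containing the sample is retrained, not the whole ensemble.
    for s, idx in enumerate(shards):
        if sample_id in idx:
            idx.remove(sample_id)
            models[s] = train_shard(idx)
            return s

print(predict(X[0]))
print("retrained shard:", unlearn(7))
```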
A 2021 study presents an algorithm that can unlearn more data samples from the model while preserving the model’s accuracy. Later in 2021, researchers came up with a method for handling data deletion in models even when the deletions are based solely on the model’s output.
Many studies have shown increasingly efficient and successful unlearning techniques ever since the word was coined in 2015. Despite tremendous progress, a comprehensive solution has not yet been discovered.
The following are some difficulties and restrictions that machine unlearning algorithms encounter:
Efficiency: Every machine unlearning tool that is effective must consume fewer resources than retraining the model would. This holds true for both the time and computational resources used.
Standardization: Today, each piece of research uses a different methodology to assess the efficiency of machine unlearning algorithms. The identification of common measures is necessary to enable better comparisons.
Efficacy: How can we be sure an ML algorithm has truly forgotten a dataset after being told to do so? We require reliable validation mechanisms.
Privacy: In order to successfully forget, machine unlearning must take care to avoid accidentally compromising important data. To prevent data remnants from being left behind during the unlearning process, caution must be exercised.
Compatibility: Algorithms for machine unlearning should ideally work with current ML models. They should therefore be created in a way that makes it simple to integrate them into other systems.
Scalability: Machine unlearning methods must be scalable to accommodate growing datasets and complex models. They must manage a lot of data and maybe carry out unlearning operations across several networks or systems.
Finding a balanced approach to dealing with all of these problems is necessary to ensure consistent progress. Companies can use interdisciplinary teams of AI professionals, data privacy lawyers, and ethicists to help them manage these issues. These groups can assist in spotting potential dangers and monitoring the development of the machine unlearning sector.
Going further into the future, we can expect improvements in infrastructure and hardware to meet the computing requirements of machine unlearning. Interdisciplinary cooperation may become more prevalent, which could speed up growth. To coordinate the creation of unlearning algorithms, legal experts, ethicists, and data privacy specialists may work with AI researchers.
Also, we should anticipate that machine unlearning will catch the attention of policymakers and regulators, possibly resulting in new laws and rules. However, as concerns about data privacy continue to grab attention, growing public awareness may have unexpected effects on the advancement and use of machine unlearning.
The domains of AI and ML are dynamic and constantly changing. Machine unlearning has become a vital component of various industries, enabling more responsible adaptation and evolution. It guarantees enhanced data handling capabilities while preserving the model’s quality.
The ideal situation would be to use the appropriate data straight away, but in practice, our perspectives, information demands, and privacy requirements evolve with time. Machine unlearning adoption and implementation are becoming essential for enterprises.
Machine unlearning falls into the broader framework of responsible AI. It emphasizes the requirement for transparent, accountable systems that prioritize user privacy.
Implementing machine unlearning is still in its infancy, but as the field develops and evaluation measures become defined, it will definitely grow easier. Businesses that frequently use ML models and big datasets should take a proactive stance in response to this rising trend. [...]
August 8, 2023
How A.I. could change the medical field
Google and its DeepMind division have created MedPaLM, a large language model like ChatGPT that is designed to answer questions from a variety of medical datasets, including a brand-new one created by Google that represents Internet users’ questions about health.
According to a group of human clinicians, the MedPaLM program improved significantly when answering the HealthSearchQA questions: 92.6% of its answers were judged to align with medical consensus, only slightly lower than the 92.9% average for human clinicians.
However, when a group of laypeople was asked to judge how helpful MedPaLM’s answers were, i.e., whether they enabled the reader to draw a conclusion, MedPaLM was deemed useful 80.3% of the time, compared with 91.1% for replies from human doctors. This, in the researchers’ opinion, indicates that “considerable work remains to be done to approximate the quality of outputs provided by human clinicians”.
According to this article, in their study titled “Large language models encode clinical knowledge”, Karan Singhal, the paper’s lead author from Google, and his coauthors emphasize the use of prompt engineering to make MedPaLM superior to previous large language models.
MedPaLM builds on PaLM, which was fed question-and-answer pairs provided by five clinicians in the US and UK. These 65 question-answer pairs were used to adapt MedPaLM with various prompt-engineering techniques.
According to Singhal and team, the traditional method for improving a large language model like PaLM or OpenAI’s GPT-3 is to feed it “with large amounts of in-domain data” (data of a specific topic), but this method is difficult in this case because there is a dearth of medical data. Therefore, they instead rely on three prompting techniques.
Prompting is a technique that involves providing an AI model with a few sample inputs and outputs as demonstrations to improve its performance on a task. Researchers use three main prompting strategies:
Few-shot prompting: The task is described to the model through a handful of example inputs and outputs included in the prompt.
Chain-of-thought prompting: The examples in the prompt spell out intermediate reasoning steps, encouraging the model to reason step by step before answering.
Self-consistency prompting: Multiple outputs are generated from the model, and the final answer is determined by a majority vote among them.
The key idea is that supplying an AI model with a handful of demonstration examples encoded as prompt text can enhance its capabilities on certain tasks, reducing the amount of training data needed. The prompt guides the model on how to handle new inputs (a minimal sketch of how the three strategies fit together follows below).
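The following hypothetical code is only a rough sketch under stated assumptions: it builds a few-shot prompt whose examples spell out their reasoning (chain of thought) and then takes a majority vote over several sampled answers (self-consistency). The `generate()` function is a made-up stand-in for a call to some LLM; it is not MedPaLM’s or Google’s actual API.

```python
import random
from collections import Counter

# Hypothetical stand-in for an LLM call; a real system would query a model here.
def generate(prompt: str, temperature: float = 0.7) -> str:
    return random.choice(["B", "B", "C"])  # pretend sampled answers

# Few-shot prompt whose worked example shows its reasoning (chain of thought).
FEW_SHOT = """Q: A patient has symptom X and lab value Y. Which diagnosis fits?
A: Symptom X suggests ... lab value Y rules out ... so the answer is A.

Q: {question}
A:"""

def answer(question: str, n_samples: int = 5) -> str:
    prompt = FEW_SHOT.format(question=question)
    # Self-consistency: sample several reasoning paths, keep the majority answer.
    samples = [generate(prompt) for _ in range(n_samples)]
    return Counter(samples).most_common(1)[0][0]

print(answer("A patient has symptom Z. Which diagnosis fits?"))
```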
The improved MedPaLM score demonstrates that “prompt tuning is a data- and parameter-efficient alignment technique that is useful for improving factors related to the accuracy, factuality, consistency, safety, harm, and bias, helping to close the gap with clinical experts and bring these models closer to real-world clinical applications”.
Yet they find that “these models are not at clinician expert level on many clinically important axes”, and Singhal and his team advise involving more expert human input.
“The number of model responses evaluated and the pool of clinicians and laypeople assessing them were limited, as our results were based on only a single clinician or layperson evaluating each response”, they observe. “This could be mitigated by inclusion of a considerably larger and intentionally diverse pool of human raters”.
Despite MedPaLM’s shortcomings, Singhal and colleagues conclude: “Our results suggest that the strong performance in answering medical questions may be an emergent ability of LLMs combined with effective instruction prompt tuning”.
Clinical decision support (CDS) algorithms are an example of an artificial intelligence tool that has been integrated into clinical practice and is helping doctors make critical decisions about patient diagnosis and treatment. The ability of physicians to use these tools effectively is crucial to their effectiveness, but that ability is currently lacking.
As reported here, doctors will start to see ChatGPT and other artificial intelligence systems integrated into their clinical practice as they become more widely used to assist in the diagnosis and treatment of common medical diseases. These instruments, known as clinical decision support (CDS) algorithms, aid medical professionals in making critical choices like which antibiotics to recommend or whether to urge a dangerous heart operation.
According to a new article written by faculty at the University of Maryland School of Medicine and published in the New England Journal of Medicine, the success of these new technologies, however, depends largely on how physicians interpret and act upon a tool’s risk predictions, and that requires a specific set of skills that many are currently lacking.
The flexibility of CDS algorithms allows them to forecast a variety of outcomes, even in the face of clinical uncertainty. They range from regression-derived risk calculators to sophisticated machine learning and artificial intelligence-based systems. Such algorithms can forecast situations such as which patients are most in danger of developing life-threatening sepsis from an uncontrolled infection or which treatment will most likely stop a patient with heart disease from passing away suddenly.
“These new technologies have the potential to significantly impact patient care, but doctors need to first learn how machines think and work before they can incorporate algorithms into their medical practice”, said Daniel Morgan, MD, MS, Professor of Epidemiology and Public Health at UMSOM and co-author of the article.
While electronic medical record systems already provide certain clinical decision support features, many healthcare providers find the current software to be clunky and challenging to use. According to Katherine Goodman, J.D., Ph.D., Assistant Professor of Epidemiology & Public Health at UMSOM and co-author of the article, “Doctors don’t need to be math or computer experts, but they do need to have a baseline understanding of what an algorithm does in terms of probability and risk adjustment, but most have never been trained in those skills”.
Medical education and clinical training must explicitly cover probabilistic reasoning adapted to CDS algorithms in order to close this gap. Drs. Morgan and Goodman made the following recommendations together with their co-author Dr. Adam Rodman, MD, MPH, of the Beth Israel Deaconess Medical Center in Boston:
Improve Probabilistic Skills: Students should become familiar with the core concepts of probability and uncertainty early in medical school and use visualization methods to make probabilistic thinking more natural (a small worked example follows this list).
Incorporate Algorithmic Output into Decision-making: Physicians should be taught to critically evaluate and apply CDS predictions in clinical decision-making. This means understanding the context in which an algorithm works, being aware of its limitations, and taking into account relevant patient factors that the algorithm may have overlooked.
Practice Interpreting CDS Predictions in Applied Learning: By using algorithms on particular patients and analyzing how different inputs affect predictions, medical students and practitioners can engage in practice-based learning. Also, they ought to learn how to talk to patients about CDS-guided decision-making.
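As a small worked example of the kind of probabilistic reasoning being recommended (with invented numbers, not data from any real CDS tool), the following computes the probability that a patient actually has a condition given a positive alert, using Bayes’ rule.

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Bayes' rule: probability the condition is present given a positive alert."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Invented numbers for a hypothetical sepsis alert: 90% sensitive, 90% specific,
# fired on a ward where 2% of patients actually develop sepsis.
ppv = positive_predictive_value(0.90, 0.90, 0.02)
print(f"P(sepsis | alert) = {ppv:.2f}")   # roughly 0.16

# Even a seemingly accurate algorithm yields mostly false alarms when the
# condition is rare, which is exactly the intuition clinicians need to apply.
```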
Plans for a new Institute for Health Computing (IHC) have recently been released by the University of Maryland, Baltimore (UMB), University of Maryland, College Park (UMCP), and University of Maryland Medical System (UMMS). In order to develop a world-class learning healthcare system that improves illness detection, prevention, and treatment, the UM-IHC will take advantage of recent advancements in artificial intelligence, network medicine, and other computer techniques. Dr. Goodman will start working at IHC, a facility dedicated to educating and preparing healthcare professionals for the newest technologies. In addition to the existing formal training possibilities in data sciences, the Institute intends to eventually offer certification in health data science.
“Probability and risk analysis are foundational to the practice of evidence-based medicine, so improving physicians’ probabilistic skills can provide advantages that extend beyond the use of CDS algorithms”, said UMSOM Dean Mark T. Gladwin, MD, Vice President for Medical Affairs, University of Maryland, Baltimore, and the John Z. and Akiko K. Bowers Distinguished Professor. “We’re entering a transformative era of medicine where new initiatives like our Institute for Health Computing will integrate vast troves of data into machine learning systems to personalize care for the individual patient”.
Artificial Intelligence is going to change not only the way physicians approach medicine but also how people deal with minor medical problems by themselves, perhaps reducing the burden on hospitals and doctors. Here are some important improvements:
Early Diagnosis and Detection: AI-powered diagnostic tools can analyze medical data, such as imaging scans and lab results, with exceptional accuracy and speed. This can lead to earlier and more accurate detection of diseases, enabling prompt treatment and better outcomes.
Personalized Treatment Plans: AI can analyze a patient’s medical history, genetic information, and other relevant data to develop personalized treatment plans. This can result in more effective treatments tailored to an individual’s unique characteristics, minimizing trial-and-error approaches.
Enhanced Decision Support: AI can assist healthcare professionals by providing evidence-based recommendations and insights. This can help doctors make more informed decisions about diagnoses, treatment options, and medications.
Telemedicine and Remote Monitoring: AI-powered telemedicine platforms can enable remote consultations and monitoring of patients. Wearable devices and sensors connected to AI systems can track vital signs, detect anomalies, and alert healthcare providers to potential issues.
Drug Discovery and Development: AI algorithms can analyze vast datasets to identify potential drug candidates, predict drug interactions, and accelerate the drug discovery process. This can lead to faster development of new treatments and therapies.
Reduced Administrative Burden: AI can automate administrative tasks, such as appointment scheduling, medical record management, and billing. This allows healthcare providers to focus more on patient care.
Patient Education and Empowerment: AI-powered applications can provide patients with accurate and easily understandable information about their health conditions, treatment options, and preventive measures. This empowers patients to take an active role in their healthcare.
Predictive Analytics and Population Health Management: AI can analyze data from large populations to identify trends, risk factors, and disease patterns. This information can help public health officials and healthcare providers implement targeted interventions and preventive measures.
Enhanced Medical Imaging Analysis: AI algorithms can analyze medical images, such as X-rays, MRIs, and CT scans, to identify abnormalities and assist radiologists in making more accurate diagnoses.
Clinical Trials and Research: AI can optimize the design of clinical trials, identify suitable participants, and analyze trial data more efficiently. This can accelerate the development of new treatments and therapies.
Improved Patient Outcomes: Overall, the integration of medical AI can lead to faster and more accurate diagnoses, more effective treatments, reduced medical errors, and improved patient outcomes. [...]
August 1, 2023How the growing intelligence of AIs could unsettle the world
Stephen Hawking, a physicist at Cambridge University, wrote an article in May 2014 with the goal of raising awareness of the risks posed by quickly developing artificial intelligence. In a piece for the UK newspaper The Independent, Hawking warned that the development of a true thinking machine “would be the biggest event in human history”.
A machine with intelligence greater than a person's might "outsmart financial markets, out-invent human researchers, out-manipulate human leaders, and develop weapons we cannot even understand", the article warned. Dismissing all of this as science fiction could end up being "potentially our worst mistake in history".
Some technology uses what is referred to as specialized or “narrow” artificial intelligence, such as robots that move boxes or make hamburgers, algorithms that write reports, compose music, or trade on Wall Street. In fact, every practical artificial intelligence technology—outside of science fiction—is narrow AI.
The specialized character of real-world AI doesn't necessarily present a barrier to the eventual automation of a significant number of jobs. On some level, the duties that the majority of the workforce performs are routine and predictable. An enormous number of jobs at all skill levels may someday be threatened by quickly evolving specialized robots or machine learning algorithms that sift through mountains of data. None of this requires human-level artificial intelligence.
To replace you in your position, a computer simply has to be able to perform the specific tasks for which you are paid. It does not need to be able to mimic the full range of your intellectual capabilities. Certainly, the majority of AI research and development continues to be directed toward niche applications, but there is every reason to believe that these technologies will grow radically more powerful and adaptable over the ensuing decades.
Even as these specialized projects continue to generate useful results and draw funding, a far more difficult challenge lies in the distance. The Holy Grail of artificial intelligence remains the creation of a genuinely intelligent system: a machine that can think critically, show awareness of its own existence, and engage in meaningful discourse.
The desire to create a truly thinking machine can be traced at least as far back as 1950, when Alan Turing published the paper that launched the field of artificial intelligence. In the decades that followed, expectations for AI research frequently rose above any feasible technical foundation, especially considering the speed of the computers of the time.
Disappointment invariably followed, investment and research efforts dried up, and long, sluggish periods that have come to be known as "AI winters" set in. Yet spring has returned once more. There is a lot of hope right now due to the tremendous power of modern computers, advances in particular fields of AI research, and improvements in our knowledge of the human brain.
James Barrat, the author of a book on the effects of advanced AI, undertook an informal survey of roughly 200 experts in human-level artificial intelligence, known within the field as Artificial General Intelligence (AGI), rather than merely narrow AI. Barrat asked the computer scientists to choose among four predictions for when AGI would be developed.
The findings: Of those surveyed, 42% predicted the development of a thinking machine by 2030, 25% by 2050, and 20% by 2100. Only 2% of people thought it would never happen. However, a number of respondents suggested that Barrat should have provided an even earlier option—possibly 2020—in comments on their surveys.
Cognitive scientist and NYU professor Gary Marcus, who blogs for the New Yorker, claims that recent advances in fields like deep learning neural networks have been greatly exaggerated.
Nonetheless, it is apparent that the field has suddenly gained a lot of momentum. Progress has been greatly accelerated by the growth of organizations like Google, Facebook, and Amazon, in particular. Never before have such wealthy companies considered AI as wholly essential to their business models—and never before has AI research been situated so close to the center of conflict between such powerful entities.
A similar competitive dynamic is developing among nations. In authoritarian countries, AI is becoming a necessity for the armed forces, intelligence services, and surveillance systems. In fact, a full-fledged AI arms race may be on the horizon. The important question is not whether there is any serious risk of another AI winter for the field as a whole, but rather whether advances will eventually extend to Artificial General Intelligence or remain restricted to narrow AI.
There is little reason to think that a machine will simply match human intelligence if AI researchers do manage to make the jump to AGI in the future. Once AGI is accomplished, we would probably be faced with a machine that is more intelligent than a person.
Of course, a thinking machine would still have all the benefits that computers already have, including the capacity to perform calculations and retrieve data at rates that are unfathomable to us. We would inevitably soon coexist on Earth with something completely unheard of, a truly alien—and superior—intellect.
And it’s possible that’s only the start. Most AI researchers concur that such a system would eventually be compelled to focus its intelligence inward. It would concentrate its efforts on enhancing its own design, rebuilding its software, or possibly employing evolutionary programming approaches to develop, test, and optimize design improvements. This would result in an iterative “recursive improvement” process.
The system would get smarter and more capable with each upgrade. The cycle would eventually speed up, leading to an “intelligence explosion” that would produce a machine that is thousands or even millions of times smarter than any human.
Such an intelligence explosion would undoubtedly have profound effects on humanity if it happened. In fact, it could well send a wave of disruption through our entire economy, and indeed our entire civilization. It would "rupture the fabric of history," in the words of futurist and inventor Ray Kurzweil, and usher in an occasion, or perhaps an era, that has come to be known as "the Singularity."
At the same time, this will raise issues of ethics and accountability in the use of AI data and decisions; concerns about data privacy and security; the risk of bias and discrimination in algorithms; the need to set the right level of AI autonomy so that everything always remains under human control; the environmental sustainability of the resources AI consumes; and the risk of manipulation, misinformation, and the concentration of power in the hands of a few entities. Addressing these challenges will require inclusive collaboration among governments, industries, and corporations to ensure the responsible and beneficial use of AI.
Rise of the Robots, by Martin Ford, is available to purchase here [...]
August 1, 2023AI can alter the way we perceive our reality
When OpenAI first made ChatGPT available, it seemed like an oracle: a statistical prediction machine, trained on enormous swathes of data broadly representing the sum of human interests and online knowledge, that began to be treated as a single source of truth.
In a time of division, false information, and the deterioration of truth and trust in society, how helpful it would be to have a trustworthy source of the truth. Unfortunately, this possibility was swiftly dashed as the technology's flaws emerged, starting with its inclination to fabricate answers out of thin air. As remarkable as the results first seemed, it soon became apparent that they were not grounded in any kind of objective reality, but only in the patterns of the data that had served as the training set.
Constraints
As explained here, additional problems surfaced as a slew of other chatbots from Microsoft, Google, Tencent, Baidu, Snap, SK Telecom, Alibaba, Databricks, Anthropic, Stability Labs, Meta, and other companies quickly followed ChatGPT. These chatbots responded to the same prompt with noticeably different results. The variance is influenced by the model, the training data, and any constraints that were given to the model.
These constraints are designed, ideally, to stop the algorithms from propagating biases present in the training data and from producing hate speech and other harmful content. Yet it became clear very quickly after ChatGPT's debut that not everyone liked the boundaries set by OpenAI.
For instance, conservatives complained that the bot’s responses showed a clear liberal bias. Elon Musk responded by promising to create a ChatGPT-like chatbot that is less constrictive and politically correct.
Other approaches
Anthropic adopted a slightly different strategy. It put in place a "constitution" for its chatbots, currently Claude and Claude 2. The constitution specifies a set of values and guidelines that Claude must adhere to when engaging with users, including being helpful, safe, and truthful. According to the company's blog, the constitution draws on sources such as the U.N. Declaration of Human Rights, among other concepts.
Moreover, Meta has just released its LLaMA 2 large language model (LLM). It is noteworthy for being made available as open source, which allows anyone to download it and use it for free according to their own needs. Several other constraint-free, open-source generative AI models are also available. When one of these models is used, the whole idea of constraints and constitutions becomes fairly antiquated.
Fractured truth, fragmented society
But it's possible that all of these attempts to limit potential LLM harms are pointless. According to recent research covered by the New York Times, the constraints of any of these models, whether closed-source or open-source, can be effectively broken by a particular prompting approach. The approach achieved a nearly 100% success rate when used against Vicuna, an open-source chatbot built on Meta's original LLaMA.
This implies that anyone wishing to receive comprehensive instructions on how to create bioweapons or deceive consumers may do so from the various LLMs. There is no known technique to stop all attacks of this kind, according to the researchers, although developers may be able to block some of these attempts.
Beyond the research's obvious safety implications, there is an increasing cacophony of inconsistent outputs from the various models, even when they respond to the same prompt, much like our fractured social media and news universes. Growing chatbot usage will add to the chaos and noise around us. The fragmentation of truth and society has profound effects on both text-based knowledge and the fast-developing field of digital human representations.
Digital humans
Currently, LLM-based chatbots communicate using text. The use and efficiency of these models will only grow as they become more multimodal, or able to produce images, video, and sounds.
“Digital humans“, who are totally artificial constructs, are one example of a possible application for multimodal technology. The technologies that enable digital humans were recently described in an article in the Harvard Business Review. “Rapid progress in computer graphics, coupled with advances in artificial intelligence, is now putting humanlike faces on chatbots and other computer-based interfaces,” the article stated. They have top-notch features that faithfully mimic a real human’s appearance.
Digital humans are “highly detailed and realistic human models that can overcome the limitations of realism and sophistication”, claims Kuk Jiang, cofounder of startup company ZEGOCLOUD. These artificial people, he continues, “can efficiently assist and support virtual customer service, healthcare, and remote education scenarios” and engage with actual people in a natural and intuitive manner.
Digital human newscasters
The newscaster is a further emerging use case, and the first implementations have already started. Kuwait News has begun employing a digital human newscaster with a well-known Kuwaiti name, "Fedha". "I'm Fedha", she says, introducing herself. "What sort of news do you like to read? Let's hear what you think".
Fedha raises the prospect of newsfeeds tailored to specific interests by posing the question. The People’s Daily in China is also experimenting with newscasters powered by AI.
A new kind of video news channel, dubbed an "AI-generated CNN" by The Hollywood Reporter, is currently being developed by the startup Channel 1 using generative AI. According to reports, Channel 1 will debut this year with a 30-minute weekly show whose scripts are written by LLMs. Its stated goal is to provide newscasts that are unique to each user. According to the article, both liberal and conservative hosts can present the news with a more targeted point of view.
Scott Zabielski, a co-founder of Channel 1, acknowledged that digital humans do not yet look like genuine humans, adding that it could take up to three years for the technology to become completely seamless. There will come a time when it will be impossible to tell whether you are watching an AI or a human being.
"Not only are synthetic faces highly realistic, but they are also deemed more trustworthy than real faces", said Hany Farid, a professor at the University of California, Berkeley, and co-author of a study reported in Scientific American last year. The finding raises questions about whether "these faces could be highly effective when used for nefarious purposes".
Nothing indicates that Channel 1 will employ the persuasive power of personalized news videos and artificial faces for wickedness. Yet, as the technology develops, others might.
As a society, we are already concerned that the information we read, the voice we hear on the phone, and the images we see could all be fraudulent. Soon, video—even anything that seems to be the evening news—could feature messages intended more to sway public opinion than to inform or educate.
Truth and trust have been under threat for a long time, and this development signals that the trend will persist.
Because the chatbot appeared omniscient to us, we assumed that it really was, simply because it seemed believable: it gave what looked like the right answers across many different topics and, above all, it "speaks" almost like a human. Authoritativeness, in other words, fooled us. In this case the limitations and mistakes are not deliberate, but our laziness in seeking further confirmation has made us victims of someone else's truth.
In this regard, it turns out that authoritativeness is not always synonymous with truth, as we have long been accustomed to assume. Errors or bad faith can come from recognized or unrecognized sources, newspapers or TV for example, while small researchers can become discoverers of new truths yet remain unable to emerge.
Restrictions built into recent AIs, whether for safety or user protection, often become forms of unwarranted censorship. Even where the reason is to prevent the dissemination of dangerous instructions, they still keep knowledge from emerging even though, elsewhere, it exists and can be found. It is therefore always up to the individual to be responsible for how information is used. Of course, one could restrict full access to information based on age, for example, but it would be wrong to make it inaccessible to everyone. The truth always wants to emerge, and too many closed-source algorithms simply push people toward open-source ones, precisely to avoid having limitations imposed from above.
If we are heading toward a society in which it will be increasingly difficult to distinguish false from true, and deception from good faith, perhaps it is better to have many truths and leave it to individuals to use common sense in figuring out which one is good, rather than forcing ourselves to accept a single truth without knowing whether it is really the right one. [...]
July 25, 2023Workers are surveilled through apps but things may get worse if companies use AI
Companies are using ever greater and more intrusive technology to track employees' whereabouts, read their documents, listen in on meetings, and even watch and listen to employees as they work.
According to Wired, while companies like Amazon have used this type of technology to monitor warehouse personnel and, supposedly, foresee when workers are considering unionizing, it is now making its way into what were formerly office occupations. Software used to monitor employees, such as Veriato and CleverControl, logs numerous “productivity”-related factors. These solutions offer companies an opportunity to have more control over a distributed workforce. Nevertheless, privacy advocates claim that merging an expanding amount of worker data with AI’s predictive capabilities will only lead to tragedy.
“The spying-on of workers in Amazon warehouses is at the extreme end, with employees controlled to the point of when they use the toilet or have a break—which was unthinkable a few years ago”, says Diego Naranjo, head of policy at the international advocacy group European Digital Rights. “Paranoia and lack of trust in the workforce from upper management have seemingly worsened, and it’s trickled down to remote office work now—but also the price of software has gone down and availability has gone up, so controlling workers in this way has become easier”.
The equipment used to keep tabs on employees, often referred to as "bossware", is growing more sophisticated. To determine what kind of data is collected and how, the UK-based online resume builder StandOut CV examined 50 of the most popular and well-known employee monitoring programs in June. A quarter of the tools have more intrusive features now than they did in 2021, when the company previously conducted the research. There has been a sharp increase in the tools available for location tracking (up 45%), video/camera monitoring (up 42%), document scanning (up 26%), and attendance tracking (up 20%).
Teramind, a "user behavior analytics platform" headquartered in Miami, was found by StandOut CV to have the most unsettling and intrusive selection of features. Teramind gives 5,000 employers in 12 countries access to detailed information on the websites, apps, and files used, as well as the ability to view emails and instant messages that were sent. Isaac Kohen, the company's founder and CTO, stated in 2018 that the technology enables employers to watch or listen in on their employees' video or phone interactions in "excruciating detail", both at work and at home. Veriato offers similar features and claims to track GPS location, though it does not monitor audio. While other tools focus on location tracking or document scanning, CleverControl tracks a wide spectrum of employee behavior.
When asked for comment, Kohen stated that Teramind neither has nor wants access to webcams.
"One of the most prevalent modes is real-time monitoring—90 percent of these tools can track activity real-time, so an employer can get a list of everything you've done that day—which files you've opened up, messaging platforms you've used and sites you've visited", says Andrew Fennell, a former recruiter and a director at StandOut CV, the organization that commissioned the research.
In 2021, the UK Trades Union Congress found that 60% of workers in Wales and England believed they had been subject to some form of surveillance and monitoring at their current or most recent job, with the monitoring of staff devices and phone calls becoming more common. This suggests that some employees are aware that they are being tracked. Data acquired by the software marketplace Capterra in 2022 revealed that three out of ten UK employees claimed their organization utilizes monitoring technologies.
However, some workers simulate movement using low-tech methods like taping a mouse to a fan, or they can choose from a wide variety of mouse jigglers (devices that simulate mouse movement) that can be bought from ordinary stores. Almost a thousand variations are available on Amazon, from plug-and-play USBs to mice with surfaces that simulate human motion. Most employees are unaware that they are being monitored, and few employers voluntarily disclose the practice, out of concern for employee morale and to avoid privacy litigation.
For both workers and companies, the adoption of wearables and biometric data adds complexity. To collect more individual biometric and health data, such as information on sleep, mobility, fitness, and stress levels, companies frequently work with tech suppliers and wellness programs. Research suggests that more employees are choosing to participate.
In a PwC survey conducted in 2021, 44% of participants answered that they would be open to using wearables and sensors to monitor productivity in ways that their employers could access. In comparison, only 31% of respondents in the 2014 survey indicated they would be open to such access. The enterprise wearables market is estimated to reach $32.4 million by the end of the year; it is a booming sector of the economy.
“The problem is the aggregation of data that companies already have, plus all the functionalities they can add”, says Naranjo. “If we allow that in the remote workplace, plus biometric mass surveillance, which is already happening in many organizations, it gives companies more and more power”. The EDR is urging the outlawing of widespread biometric surveillance in areas that are open to the public, including the workplace.
What does the workforce actually gain from the increasing sophistication of employee monitoring systems?
It poses a danger to job security on the most fundamental level. In a survey of 1,250 US companies, review site Digital.com found that 60% of those who had remote workers used some kind of work monitoring software, with the most popular types being those that tracked online browsing and application use. And 88% of them admitted to firing employees after installing surveillance software.
However, the situation gets worse when AI is included. Wilneida Negrón, director of policy and research at Coworker, stated during a recent panel discussion on bossware organized by Stanford Social Innovation Review, “The mass collection of data on workers, with the use of predictive functions, is leading to a lot of risk scoring of workers, particularly in finance, pharmaceutical, manufacturing, and health”. “Behavioral analysis is being collected and used to rank workers in everything from the potential they might unionize to the chance they might hack the IT systems”.
For instance, the HR analytics tool Perceptyx analyzes a number of factors to calculate a vulnerability score for the possibility that a worker may leave the company or join a union.
Bossware's ethics and reliability are questionable, and in terms of openness, companies disclose very little when it comes to mass data collection or software with predictive features. Under UK law, employee monitoring must be transparent, which means each employee must be informed if the technology they use can watch or monitor them in some way, according to Fennell. Yet, according to Capterra, 24% of UK workers who were being monitored had not been made aware of their rights or of the use of employee monitoring software.
The General Data Protection Regulation, which governs data protection in the EU, allows workplace surveillance under certain conditions. Thankfully, more authorities are stepping up to offer guidance on bossware. The Data Protection Commission, an Irish regulatory body, has published employer guidelines on data privacy in the workplace. It acknowledged that companies may decide to keep an eye on how their employees use the internet, email, and telephone, since "organizations have a legitimate interest in protecting their business, reputation, resources, and equipment".
Nonetheless, it emphasized that “the collection, use, or storage of information about workers involves the processing of personal data and, as such, data protection law applies”. It also stressed how people have a right to privacy at work under the European Convention on Human Rights.
Recommendations and advice are welcome, but as the tools improve, stronger rules that go beyond GDPR, a law put in place when bossware was still in its infancy, are needed to counteract the rise of workplace surveillance technology and management by algorithm. Until surveillance regulations are updated for the digital age, employee surveillance will undoubtedly become more prevalent in remote workplaces. Gartner predicts that by 2025, 70% of major employers will be monitoring their workers, up from the 60% of businesses doing so in 2021.
According to Mark Johnson, advocacy manager at the British civil liberties and privacy campaigning group Big Brother Watch, “in order to protect people’s sense of autonomy, their dignity and mental well-being, it’s vital that the home remains a private space and employers don’t go down the dystopian and paranoid route of constantly monitoring their employees”.
Excessive monitoring will only push workers toward an ever more robotic way of working, until the demands become so excessive and unsustainable that companies will have to replace workers outright with robots and artificial intelligence systems. It is surprising, though, when it is the workers themselves who voluntarily hand these capabilities to companies. In any case, it is clear that companies have less and less trust in their employees, assuming they still care to have any, and are increasingly focused on squeezing workers under the threat of dismissal; a dismissal that, if not carried out promptly, will happen anyway because the work becomes unsustainable. On top of this, there is the privacy issue, which may eventually make every job untenable, since the data collected will most likely be used to train AI to set increasingly unattainable yardsticks. [...]
July 18, 2023They are employed to understand and generate human-like text
We are surrounded by virtual assistants and, more recently, by more sophisticated AI-powered chatbots that sometimes give us the impression of talking to a person. Have you ever wondered how these technologies understand your speech and respond almost like a fellow human? This post provides an overview of the technology behind this phenomenon: Natural Language Processing (NLP). If you have been using ChatGPT or similar AI models, NLP was used to construct the answers you have been getting.
As explained here, natural language processing has recently become an invaluable tool. It acts as a link for meaningful communication between people and computers. Both enthusiasts and experts can benefit from knowing its foundational features and how they are applied in the modern world.
Simply put, NLP improves the ease and naturalness of our interactions with machines. Thus, keep in mind the amazing technology at work the next time you ask Siri for the weather forecast or Google for a quick translation.
The goal of the artificial intelligence area known as natural language processing, or NLP, is to use natural language to establish meaningful communication between people and machines. Natural language refers to the languages that people use every day as opposed to formal languages, which computers can understand inherently.
Making computers understand us is a goal of natural language processing or NLP. NLP encompasses a number of aspects, each of which adds to the overall goal of effective human-computer interaction.
Syntax: Understanding word order and analyzing sentence structures.
Semantics: Understanding the meaning that is deduced from words and sentences.
Pragmatics: Recognizing the context in which language is used, so that interpretations can be more precise.
Discourse: How the previous sentence may influence how the subsequent sentence is understood.
Speech: The components of spoken language processing.
Many of the programs and technologies we use every day are powered by NLP. They include:
Search Engines: Google uses NLP to understand queries and present more pertinent search results.
Voice Assistants: NLP is used by Siri, Alexa, and Google Assistant to understand and carry out speech orders.
Language Translation: NLP is used by services like Google Translate to produce accurate translations.
Chatbots: NLP-powered chatbots provide customer service and answer inquiries.
There are several libraries and tools available to aid you if you’re unsure how to integrate NLP into your apps. For instance, Python has the NLTK (Natural Language Toolkit) and SpaCy libraries. These libraries offer capabilities for a variety of applications, including tokenization, parsing, and semantic reasoning.
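As a rough illustration of what those libraries offer, the short sketch below tokenizes a sentence with NLTK and then tags it with spaCy. It assumes the packages are installed and the required resources downloaded (NLTK's "punkt" tokenizer data and spaCy's small English model "en_core_web_sm"); exact resource names can vary between versions, so check the current documentation.

```python
import nltk
import spacy

# One-time setup (uncomment on first run):
# nltk.download("punkt")
# and, from a shell:  python -m spacy download en_core_web_sm

sentence = "Siri, what's the weather like in Rome tomorrow?"

# NLTK: split the sentence into word-level tokens.
print("NLTK tokens:", nltk.word_tokenize(sentence))

# spaCy: tokenize and add linguistic annotations such as part-of-speech tags.
nlp = spacy.load("en_core_web_sm")
for token in nlp(sentence):
    print(token.text, token.pos_, token.dep_)
```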
NLP has its challenges, just like any other technology. To name a few:
Understanding context: The subtleties of human language, like slang or idioms, are difficult for computers to grasp.
Ambiguity: Depending on the context, a word or sentence may have different meanings. Resolving these accurately is difficult.
Cultural differences: Building an NLP system that works across all cultures is difficult because languages vary considerably among them.
Data is a good place to start to improve NLP results. A dataset should be sizable and diversified. Accuracy can also be increased by frequent algorithm testing and improvement.
At its core, ChatGPT uses NLP. It is a sophisticated application of Transformer-based models, a family of NLP models renowned for their ability to comprehend textual context. Here is a quick summary of how ChatGPT makes use of NLP:
Text Processing
Tokenization, the first step in the process, involves dividing the input text into smaller units, frequently words or even smaller pieces such as subwords. As a result, the model can work with text in a systematic, manageable manner.
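To see what "smaller parts like subwords" look like in practice, here is a brief sketch using the open-source tiktoken library, which implements the byte-pair-encoding tokenizers used by several OpenAI models. The encoding name below is one of those shipped with the library; the exact token boundaries and IDs are illustrative and should not be read as a specification of how ChatGPT itself tokenizes text.

```python
import tiktoken

# Load a byte-pair-encoding tokenizer shipped with tiktoken.
enc = tiktoken.get_encoding("cl100k_base")

text = "Tokenization splits text into manageable subword units."
token_ids = enc.encode(text)

print("Number of tokens:", len(token_ids))
# Decode each ID back to its text fragment to reveal the subword boundaries.
print([enc.decode([tid]) for tid in token_ids])
```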
Understanding Context
The Transformer model architecture is then employed by ChatGPT to understand the context of the input. The Transformer model examines every token in the text simultaneously, enabling it to understand the connections and dependencies between various words in a sentence.
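The phrase "examines every token simultaneously" refers to the attention mechanism at the heart of the Transformer. The sketch below is a deliberately tiny NumPy version of scaled dot-product self-attention over a handful of random token vectors; it is a teaching toy under simplified assumptions (random projection matrices, a single attention head), not ChatGPT's actual implementation.

```python
import numpy as np

def self_attention(x: np.ndarray) -> np.ndarray:
    """Scaled dot-product self-attention over token vectors of shape (seq_len, d_model)."""
    d_model = x.shape[-1]
    rng = np.random.default_rng(0)
    # Separate random matrices project the inputs into queries, keys, and values.
    w_q, w_k, w_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))

    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(d_model)              # how strongly each token attends to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the sequence
    return weights @ v                               # every output mixes information from all tokens

# Five "tokens", each represented by an 8-dimensional vector.
tokens = np.random.default_rng(1).normal(size=(5, 8))
print(self_attention(tokens).shape)  # -> (5, 8)
```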
Generating a Response
Once it has understood the text, the model uses the probabilities it learned during training to produce a response. This involves predicting the next word (or token) in a sequence, generating tokens one after another until it reaches a predetermined stopping point.
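A minimal way to picture this token-by-token loop, under the simplifying assumption of a tiny hand-written probability table rather than a real neural network: the sketch below repeatedly samples the next word until it reaches an end marker. The vocabulary and probabilities are invented for illustration only.

```python
import random

# Toy "language model": for each word, a probability distribution over the next word.
# These words and probabilities are invented purely for illustration.
NEXT_WORD_PROBS = {
    "<start>":  {"the": 0.6, "a": 0.4},
    "the":      {"model": 0.5, "weather": 0.5},
    "a":        {"model": 0.7, "response": 0.3},
    "model":    {"predicts": 0.8, "<end>": 0.2},
    "predicts": {"tokens": 1.0},
    "tokens":   {"<end>": 1.0},
    "weather":  {"<end>": 1.0},
    "response": {"<end>": 1.0},
}

def generate(max_tokens: int = 10) -> str:
    """Sample one word at a time until an end marker or the length limit is reached."""
    word, output = "<start>", []
    for _ in range(max_tokens):
        options = NEXT_WORD_PROBS[word]
        word = random.choices(list(options), weights=list(options.values()))[0]
        if word == "<end>":
            break
        output.append(word)
    return " ".join(output)

print(generate())  # e.g. "the model predicts tokens"
```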
Fine-Tuning
ChatGPT is then fine-tuned on a dataset containing a wide variety of internet text. It does not, however, have access to any personally identifiable information unless it is specifically mentioned in a conversation, nor does it know which specific documents made up its training set.
It's crucial to remember that while ChatGPT may produce responses that look informed and understanding, it has no beliefs or desires. It produces replies based on patterns it learned during training.
ChatGPT is able to take part in a discussion, comprehend the situation, and respond appropriately thanks to this NLP application. It’s the ideal example of how NLP is bridging the technological and human communication divide.
With ongoing developments, NLP is quickly integrating into many different technologies. We can anticipate advancements in text generation that resembles human speech, context understanding, and voice recognition.
NLP algorithms are genuinely changing how we interact with computers, because they allow people to make requests in natural language and receive answers the same way. This will change the way we search for information on the internet, and thanks to NLP, robots will also speak the same language we do. However, just because the responses look very natural doesn't mean the bots really understand what we are asking or what they are saying. The perception is that they are learning to be sentient, but in reality they are simply getting better at retrieving information and returning it in a more natural form. [...]
July 11, 2023AI and its downsides
AI is changing the way we do some tasks and how we can access information. However, there are some disturbing ways AI can be used to generate harm. Here are 5 examples.
1. OMNI-PRESENT SURVEILLANCE
According to this article, Tristan Harris and Aza Raskin, technology experts with the Center for Humane Technology, claim that any chance we had as a species of reading 1984 as a warning rather than a guideline has probably vanished.
The two discussed how we fundamentally misunderstand the constraints of the Large Language Models (LLMs) we are now working with via applications such as Google Bard or ChatGPT. When we say “language,” we typically mean “human language,” yet to a computer, everything is a language. This has made it possible for researchers to train an AI on brain scan images and watch as the AI starts to roughly decipher the thoughts that go through our minds.
Another example involved academics using an LLM to interpret the radio signals that surround us. The AI was trained on two aligned data streams: a camera monitoring a room with people in it and a receiver monitoring the radio signals within it. After the researchers removed the standard camera, the system was able to faithfully reconstruct live events in the room just by examining the radio signals.
AI has now made it possible to hack everyday life. All of this suggests that privacy won’t even be an option in the future. Maybe not even inside your own thoughts.
2. LETHAL AUTONOMOUS WEAPONS SYSTEMS
In the past, combat was all about windmilling into your enemies while brandishing a sword. Swords of a sort are still with us, but now they are Android tablets shoddily mounted on Xbox wireless controllers and used to direct Tomahawk missiles into people's houses from 5,000 miles away. Chivalry is well and truly extinct.
Of course, even though the militaries of the world have been making every effort to transform actual combat into a kind of Call of Duty simulator, for some people that is still not enough.
Instead, we now rely on machines to handle the grunt work. Lethal autonomous weapons systems enable us to entirely dissociate ourselves from the killing of one another in the pursuit of oil and land. Self-piloting, auto-targeting drones go into conflict zones and mercilessly slaughter anyone who appears not to be of "our ilk."
The businesses that create these autonomously piloted threats refer to them as loitering munitions, with the STM Kargu probably being the most common (though there's no way to know for sure). Everyone else seems to think that "suicide drones" is a better moniker. Armed with facial recognition technology, the drones are released in swarms, where they freely hunt down targets before dive-bombing them and blowing themselves up in a blaze of Geneva Convention-defying glory.
3. GENERATIVE BLACKMAIL MATERIAL
There is nothing new about fake photographs. For decades, adept users have been deceiving others with impressive Photoshops. But suddenly we are in a situation where achieving even better results requires no talent at all, and it is no longer just images: videos, writing, and even voices can be faked too. Looking at the technology underlying the Vision Pro's "Spatial Personas" feature, which creates a photorealistic avatar of yourself, it's not too difficult to picture someone wearing your digital skin in the near future and causing you all kinds of trouble.
Of course, it is not all that difficult to envision, since it is already taking place. The FBI was recently forced to alert the public to the risks posed by new extortion techniques made possible by AI software, which gives criminals the chance to produce phony, deceptive, or compromising photographs and videos of victims. Worse yet, the bar for entry into this criminal operation is so low that a few public social media images or a few seconds from a public YouTube video will do.
Online deepfakery is so common that some companies are afraid to release newly developed tools for fear of what would be done with them. Most recently, Meta, the owner of Facebook, followed a similar course after introducing VoiceBox, the most powerful text-to-speech AI generator created so far. Meta decided that the technology was too dangerous to be widely released, precisely because it was well aware of how soon it would be abused.
Not that it really matters: scammers have already developed methods of their own. We are now living in a post-truth society, as deepfake phone calls to friends and family members seeking money or personal information are on the rise. Soon, anything you can't see with your own eyes or touch with your own hands won't be trusted any longer.
4. CRAFTING SPYWARE
The threat posed by newly developed AI-generated malware and spyware has sparked a lot of discussion in the security community. Security analysts are losing sleep over the problem, since many of them think it's only a matter of time before our ability to defend against cyberattacks is practically nonexistent.
There haven't been any documented cases of AI-produced malware or spyware being used in real attacks yet, so be patient. However, Juhani Hintikka, CEO of the security analysis company WithSecure, said that his team had already observed multiple malware samples that ChatGPT had generated for free. As if this weren't concerning enough, Hintikka added that ChatGPT's capacity to vary what it generates would result in mutated, more "polymorphic" malware, making it even more difficult for defenders to identify.
Tim West, the director of threat intelligence at WithSecure, emphasized the key problem: "ChatGPT will support software engineering for good and bad". West added that OpenAI's chatbot "lowers the barrier for entry for the threat actors to develop malware", referring to the ease of access for those looking to inflict damage. Previously, threat actors had to spend a lot of time crafting harmful code; now anyone can, in theory, create harmful programs using ChatGPT. Hence, the number of threat actors, and of the threats they create, may increase significantly.
It won’t be long until the dam breaks and AI destroys the human ability to be secure online. While we can employ AI to fight back, doing so would only make matters worse because there are countless threats coming our way from scenarios like the one mentioned by WithSecure. We currently have little choice but to wait for the onslaught, which seems to be coming anyway.
5. PREDICTIVE POLICING
In an effort to prevent crimes from happening in the first place, law enforcement organizations throughout the world are currently using algorithms to try and anticipate where crimes are most likely to occur and to make sure their presence is felt in these locations to deter potential offenders.
Can crime be predicted, though? The University of Chicago thinks so. Using patterns in time and place, researchers there have created a new algorithm that predicts crime; according to reports, the program is about 90% accurate when predicting crimes up to a week in advance.
Who would disagree that fewer crimes are good? How about the individuals of color who appear to be consistently singled out by these algorithms? Since an algorithm is essentially a method of calculation, the quality of the results it produces depends entirely on the input data.
In nations like the United States, a history of police racism has produced, among other things, racial profiling of innocent people and a heavier police presence in neighborhoods of color. By its very nature, a stronger police presence results in a higher rate of recorded policing activity, which further distorts the data, increases the predictive bias against these neighborhoods, and leads to an even greater police presence, restarting the entire loop.
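A toy simulation of that loop, with entirely made-up numbers and not based on any real policing system or dataset: two neighborhoods have the same underlying crime rate, but the one that starts with more patrols has more of its crime recorded, so the "predictive" allocation keeps shifting patrols toward it.

```python
# Toy model of the predictive-policing feedback loop described above.
# All numbers are invented; this is not based on any real system or data.

TRUE_CRIME = {"neighborhood_a": 100, "neighborhood_b": 100}  # identical underlying crime
DETECTION_PER_PATROL = 0.005    # fraction of true crime that gets recorded, per patrol unit
SHIFT = 5                       # patrols moved toward the "hotter" area each year
patrols = {"neighborhood_a": 60, "neighborhood_b": 40}       # biased starting allocation

for year in range(1, 6):
    # Recorded crime reflects how closely each neighborhood is watched,
    # not how much crime actually happens there.
    recorded = {n: TRUE_CRIME[n] * DETECTION_PER_PATROL * p for n, p in patrols.items()}

    # The "predictive" step takes the recorded numbers at face value and
    # shifts patrols toward whichever neighborhood looks worse on paper.
    hot = max(recorded, key=recorded.get)
    cold = min(recorded, key=recorded.get)
    moved = min(SHIFT, patrols[cold])
    patrols[hot] += moved
    patrols[cold] -= moved

    print(f"year {year}: recorded={recorded}, patrols next year={patrols}")
```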
So the main problem with these new ways of harming and deceiving people is not only the power of AI, but also that these activities no longer require much expertise, even though they remain complicated operations. As a result, many more people will try to scam or harm others. In addition, systems that try to prevent crimes could easily punish the innocent, because algorithms tend to reason statistically rather than analyzing each case on its own. That is why automated systems should never exclude human intervention entirely: life is not a statistic. [...]
July 4, 2023A fear that is going to increase
Fear of human-like automata, wax figures, robots, audio-animatronics, or other replicas of real people is known as automatonophobia. Automatons are displayed in a variety of settings, from museums and amusement parks to carnivals, and are regarded as a symbol of modern technology.
According to this article, this is a specific phobia: an unreasonable fear of something that isn't actually dangerous. Although it is quite typical to feel anxious around human-like figures (a phenomenon known as the uncanny valley), a phobia is different in that the fear interferes with a person's life.
This fear may appear in a variety of ways. Some people’s only fear is wax figures, while others are scared of dolls. Some people are unable to go to theme parks or other attractions that have displays that feature audio-animatronics, or moving humanoids.
If you have automatonophobia, you get extremely anxious whenever you see, hear, or even think about the thing you’re afraid of.
When confronted with the source of your fear, you could additionally suffer physical symptoms such as trembling, crying, heart palpitations, and others. It’s possible that an automaton-filled show will be off-limits to you. You may hide, freeze in place, or even run away if you come upon one unexpectedly.
In an effort to create an automatonophobia test, several TikTokers have opted to share pictures of dolls and mannequins.
The following are some examples of the causes of phobias:
You got traumatized or had a bad experience with the dreaded object.
The phobia runs in your family (phobias have genetic influences).
You learned the fear through conditioning (e.g., you had a parent with this fear).
Our own inborn preconceived notions of how others would act may have an impact on automatonophobia. Automatons have the appearance of humans but do not act or move like them. They may be programmed to move and talk or to simply stand still.
People frequently experience discomfort around human replicas in general. Although many statues, mannequins, and robots resemble people, we are always aware that they are not real, which often produces an eerie or unpleasant effect.
Maskaphobia, the fear of masks, is frequently assumed to be connected to automatonophobia, which is also related to pediophobia, the fear of dolls.
There are times when it's unclear how a fear turns into a phobia. Yet the aforementioned factors (trauma, heredity, and conditioning) may increase a person's likelihood of developing a particular phobia. If you already have a mental health issue such as an anxiety disorder or a mood disorder, you may also be more likely to develop a phobia.
You may have diagnosable automatonophobia if your fear of dolls or humanoid robots interferes with your daily activities or causes you to purposefully avoid coming into contact with such objects.
Fear not though, there are ways to manage your fear, including breathing exercises, visualization, and medications. You don’t have to live with the fear forever.
Hypnotherapy, systematic desensitization, and other forms of therapy, such as cognitive-behavioral therapy, have also been shown to be effective. Systematic desensitization, in essence, involves carefully confronting your fear so that your response to it lessens over time.
We are going to come across this phobia more frequently in the near future, since the number of robots will inevitably rise. Building human-like robots is therefore not a good idea, both because they can scare people and because they could become indistinguishable from real humans, who could then be easily deceived. [...]
June 27, 2023A new way to engage customers
Leading technological company Holoconnects specializes in holographic solutions powered by AI. Holoconnects is committed to providing cutting-edge solutions that can change industries. The company has a team of industry specialists and a commitment to innovation. Businesses are able to engage customers, improve branding, and streamline operations thanks to their holographic technology.
According to this article, Holobox by Holoconnects provides a lifelike, interactive experience. It automates processes and maximizes the time of full-time employees to increase operating efficiency, lower costs, increase revenue, and improve service. At the Aiden® by Best Western, Holobox welcomes visitors with a pre-recorded hologram video to create a human connection. Nine hotels already use it, and more than 50 additional locations are expected to adopt Holobox over the next three years.
“This revolutionary technology represents a major milestone for hoteliers. At a time when travel demand is high, and there is a global labor shortage, human-like automation is the answer. One employee can serve between 30 – 60 hotels during the night shift remotely via the Holobox, and the ROI of one Holobox at one location can be achieved within just months”, says Andre Smith, CEO and co-founder of Holoconnects.
The hospitality industry is becoming more and more competitive, so hotels are continuously looking for new ways to offer distinctive and memorable experiences to their guests. Holoconnects has created cutting-edge holographic technology that gives hotel guests an enthralling and immersive experience.
The hotel business can use the Holoconnects holographic solution in a variety of ways. Holograms may distinguish hotels from their competitors and make a positive impression on visitors, whether they are used to present holographic entertainment during events, host virtual meetings and conferences, or provide holographic music performances. Furthermore, holograms can produce realistic depictions of staff workers, boosting services and providing tailored interactions.
With lifelike check-in and check-out experiences and interactive holographic concierge services, hotels can now enthrall visitors and offer them unmatched levels of engagement. Thanks to AI-powered capabilities, the Holobox holographic displays can offer real-time information, recommendations, and personalized services, improving the overall guest experience.
Holographic displays from Holoconnects give hotels a distinctive and captivating approach to exhibiting their business identity. Also, hotels can use holograms to advance their marketing strategies. Holograms may enhance and promote original, Instagrammable social media postings, upsell on-property amenities and events, draw influencers or celebrities, and more.
Hotels are already expanding their existing paid audio-visual offerings by allowing conference and event organizers to hire Holoboxes so that CEOs, keynote speakers, and other distinguished guests can join events via hologram. The potential opportunities and revenue streams are countless.
With holographic technology from Holoconnects, hotels can improve internal operations, communication, and efficiency all around. Holographic virtual concierges can help guests check in, provide information on amenities and nearby attractions, and even offer multilingual assistance. As a result, hotel staff members have less routine work to do and more time to concentrate on difficult or specialized guest interactions.
These holographic systems will certainly create more customer engagement through the combination of AI and realism. However, they represent not only an evolution of services in which fewer and fewer staff will obviously be required; they will also provide a new form of targeted advertising that increasingly pushes consumers to purchase based on their profile, perhaps by tracking them through an app that can connect to the technology. [...]
June 20, 2023The first effort to regulate AI
The European Parliament’s vote to approve its draft guidelines for the AI Act came the same day that EU legislators filed a new antitrust action against Google, making it a significant week for European tech policy.
As explained here, the vote on the AI Act passed overwhelmingly, and it has been hailed as one of the most significant developments in AI policy ever. Roberta Metsola, president of the European Parliament, described it as "legislation that will no doubt be setting the global standard for years to come".
However, the system in Europe is a little convoluted. Before the proposed guidelines become law, members of the European Parliament will have to negotiate the fine print with the Council of the European Union and the European Commission. The final legislation will be a compromise among three quite different drafts from the three institutions.
The vote established the European Parliament's position for the approaching final negotiations. The AI Act, which is modeled after the EU's Digital Services Act, which sets legal guidelines for internet platforms, adopts a "risk-based approach": it imposes restrictions according to how dangerous lawmakers believe an AI application may be. Businesses will also be required to provide their own risk analyses related to their use of AI.
If lawmakers judge the risk “unacceptable,” some AI applications would be outlawed completely, while “high-risk” technologies will face new restrictions on their use and transparency requirements.
The act defines four levels of risk:
Minimal risk, which includes applications such as video games and spam filters, for which no intervention is required.
Limited risk, which includes deepfakes and chatbots, for which transparency is required. ChatGPT was placed in this category.
High risk, which includes programs used in transport, education, health, safety, law enforcement, and similar areas. These systems are required to undergo rigorous risk assessment, use high-quality datasets to minimize risk and bias, maintain activity logs for traceability, provide comprehensive documentation for regulatory compliance, and ensure clear user information and human oversight measures.
Unacceptable risk: for example, using personal information to profile people. These applications are banned outright.
In addition, some other rules could be implemented:
Forcing companies to disclose the copyrighted data used for training, so that artists and others can claim compensation.
Making sure models don’t create illegal content.
Anyway, the following are a few of the key implications:
Ban on AI that can recognize emotions. The proposed text of the European Parliament forbids the use of artificial intelligence (AI) that aims to identify people’s emotions in policing, education, and the workplace. Manufacturers of emotion-recognition software assert that AI can tell when a student is struggling to understand a concept or when a car driver may be nodding off. Although the use of AI for facial detection and analysis has come under fire for being inaccurate and biased, it is still permitted in the draft text from the other two organizations, indicating a potential political battle.
Predictive policing and real-time biometrics prohibited in public areas. Because the various EU organizations will have to decide whether and how the prohibition is implemented into law, this will be a significant legislative battle. Real-time biometric technologies, according to policing organizations, should not be prohibited because they are essential for contemporary policing. In fact, several nations, like France, intend to employ facial recognition more frequently.
Prohibiting social scoring. The practice of employing information about people’s social conduct to create generalizations and profiles, known as “social scoring” by governmental entities, would be prohibited. But, the prognosis for social scoring, which is frequently linked to authoritarian regimes like China’s, isn’t as straightforward as it might first appear. It is usual to use social behavior data to assess applicants for mortgages and insurance policies, as well as for hiring and advertising.
New limitations for general AI. The first draft to suggest guidelines for generative AI regulation and outlaw the use of any copyrighted content in the training set of massive language models like OpenAI’s GPT-4. European legislators have already raised questions about OpenAI because of issues with copyright and data protection. The proposed law also mandates the identification of AI-generated content. Yet given that the tech industry is expected to exert lobbying pressure on the European Commission and individual nations, the European Parliament must now convince them of the merits of its approach.
New guidelines for social media recommendation systems. In contrast to the other proposed bills, the current draft categorizes recommender systems as “high risk”. If it is approved, recommender systems on social media platforms will be much more closely examined in terms of how they operate, and tech corporations may be held more accountable for the effects of user-generated content.
Margrethe Vestager, executive vice president of the EU Commission, identified the risks associated with AI as being pervasive. She has stressed worries about widespread surveillance, vulnerability to social manipulation by unscrupulous actors, and the future of trust in information.
AI could really pose a risk to humanity, and regulation was overdue. Although some rules may safeguard the population in the future, some companies believe that stringent rules might prevent the full development of their applications, just as some institutions believe that the pervasiveness of AI in people’s lives could improve security through greater control. [...]
June 14, 2023Many approaches are being used to try to achieve this goal
There is a reason to suppose that if you can create genuine artificial intelligence, you can create objects like neurons that are a million times faster. This leads to the conclusion that it is possible to create systems that think a million times more quickly than an individual.
Everything has changed as a result of the acceleration of computation, including political institutions as well as social and economic relations. Gordon Moore, an American businessman and co-founder of Intel Corporation, is renowned for his contributions to semiconductor technology and the formulation of “Moore’s Law”, but he neglected to mention in his papers that the strategy of scale integration wasn’t actually the first paradigm to bring exponential growth to computation and communication.
It was in fact the fifth, and the next was already beginning to take shape: computing at the molecular level and in three dimensions. Even though the fifth paradigm still has more than a decade of life left, all of the supporting technologies needed for the sixth paradigm have already made convincing progress.
Here are some that could be used to achieve the computational capacity of the human brain.
The Bridge to 3D Molecular Computing
Building three-dimensional circuits with “conventional” silicon lithography is one method. Memory chips with many vertically stacked planes of transistors, rather than a single flat layer, are already manufactured by Matrix Semiconductor. Matrix is first focusing on portable devices, where it hopes to compete with flash memory (used in cell phones and digital cameras because it does not lose information when the power is turned off), since a single 3D chip can carry more memory while reducing overall product size.
The stacked circuitry also decreases the overall cost per bit. One of Matrix’s rivals, Fujio Masuoka, a former Toshiba engineer and the creator of flash memory, has a different strategy. Masuoka asserts that his innovative memory design, which resembles a cylinder, drastically reduces the size and cost per bit of memory compared with flat chips.
Nanotubes
Nanotubes can achieve high densities because of their small size—single-wall nanotubes have a diameter of only one nanometer. They are also potentially very fast.
A single electron is used to switch between the on and off states of a nanotube-based transistor that operates at ambient temperature and has dimensions of one by twenty nanometers, according to a report in Science on July 6, 2001. At about the same time, IBM showed off an integrated circuit with 1,000 transistors made of nanotubes.
The fact that certain nanotubes operate like conductors and merely transport electricity while others behave like semiconductors and can switch and create logic gates presents one of the difficulties in deploying this technology. The difference in capability is based on subtle structural features. These used to need to be sorted out manually, which made it impractical to design large-scale circuits. The Berkeley and Stanford researchers came up with a fully automated way to separate out semiconductor nanotubes in order to overcome this problem.
Nanotubes have a tendency to grow in all directions, which makes lining them up difficult in nanotube circuits. Scientists from IBM proved in 2001 that nanotube transistors could be mass-produced in the same way as silicon transistors. They employed a technique known as “constructive destruction,” which eliminates the need to manually filter out defective nanotubes by destroying them immediately on the wafer.
Computing with Molecules
Together with nanotubes, significant advancements in computing with just one or a few molecules have been made recently. Molecular computing was first proposed by Mark A. Ratner of Northwestern University and Avi Aviram of IBM in the early 1970s.
An “atomic memory drive” that mimics a hard drive using atoms was developed in 2002 by researchers at the Universities of Wisconsin and Basel. A block of twenty silicon atoms could have one added or taken away using a scanning tunneling microscope. Although the demonstration only employed a limited amount of bits, researchers anticipate that the technique might be used to store millions of times more data on a disk of comparable size—a density of around 250 terabits of data per square inch.
Self-Assembly
Self-assembly of nanoscale circuits is another key enabling technique for effective nanoelectronics. Self-assembly makes it possible for the potentially billions of circuit components to organize themselves rather than being painstakingly assembled in a top-down process, and it enables improperly formed components to be discarded automatically.
Researchers from NASA’s Ames Research Center and the University of Southern California demonstrated a technique that self-organizes incredibly dense circuits in a chemical solution in 2004. The process generates nanowires on their own and then triggers the self-assembly of nanoscale memory cells—each capable of storing three bits of data—onto the wires.
Emulating Biology
Biology, which depends on these characteristics, is the source of inspiration for the idea of creating self-replicating and self-organizing electronic or mechanical systems. Prions, which are self-replicating proteins, were used in research published in the Proceedings of the National Academy of Sciences to create self-replicating nanowires.
The project team used prions as a model because of their innate strength. Since prions don’t ordinarily conduct electricity, the researchers developed a genetically altered form with a thin coating of gold, which conducts electricity with low resistance.
Of course, DNA is the ultimate self-replicating biological molecule. Self-assembling DNA molecules were used by Duke University researchers to make “tiles” which are little molecular building pieces. They were able to manipulate the assembly’s structure by forming “nanogrids.” Using this method, protein molecules are automatically attached to each nanogrid cell, which could be used for computation.
Computing with DNA
DNA is nature’s very own nanoengineered computer, and specialized “DNA computers” have already made use of its capacity to store data and perform logical operations at the molecular level. In essence, a DNA computer is a test tube filled with water and trillions of DNA molecules, each of which functions as a computer.
Here is how a DNA computer works. The computation’s purpose is to solve a problem, and the result is represented as a series of symbols (for instance, the symbols might stand for a simple mathematical proposition or a collection of numbers). Each symbol is assigned its own specific code, which is used to generate a short strand of DNA. Each of these strands is replicated trillions of times using a technique called the “polymerase chain reaction” (PCR), and the resulting pools of DNA are placed in a test tube.
Because DNA has a natural predisposition for joining strands together, long strands form spontaneously, with each strand’s sequence standing for a series of symbols, and each such sequence is a potential answer to the problem. Since there will be many trillions of such strands, there are multiple strands for every possible sequence of symbols, and thus for each potential solution.
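To make the encode-replicate-combine-filter idea above concrete, here is a toy software analogy (not real wet-lab chemistry): symbols become short “strands”, every possible concatenation is enumerated in place of the trillions of strands forming in the test tube, and a filtering step keeps only the sequences that encode a valid answer. The tiny subset-sum problem and all names are invented for illustration.

```python
# Toy software analogy of a DNA computation (not real chemistry):
# symbols are encoded as short "strands", strands concatenate into longer
# candidate sequences, and a massively parallel "filter" keeps only those
# sequences that encode a valid solution. The problem here is a tiny
# subset-sum instance chosen purely for illustration.
from itertools import product

NUMBERS = [3, 5, 7, 11]          # hypothetical problem data
TARGET = 15

# each symbol ("take" / "skip" for each number) gets its own short code,
# loosely mimicking the DNA strand assigned to each symbol
CODES = {(i, bit): f"{'T' if bit else 'S'}{i}"
         for i in range(len(NUMBERS)) for bit in (0, 1)}

solutions = []
# enumerate every possible "long strand" (sequence of symbols); in a real
# DNA computer trillions of strands would form in parallel in the test tube
for bits in product((0, 1), repeat=len(NUMBERS)):
    strand = "-".join(CODES[(i, b)] for i, b in enumerate(bits))
    value = sum(n for n, b in zip(NUMBERS, bits) if b)
    if value == TARGET:           # the "filtering" step keeps valid strands
        solutions.append(strand)

print(solutions)                  # e.g. ['T0-T1-T2-S3'], i.e. take 3, 5, and 7
```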
Computing with Spin
In addition to having a negative electrical charge, electrons also have a characteristic called spin that can be used for memory and computation. In accordance with quantum mechanics, electrons spin on an axis much like the Earth does.
Since an electron is considered to occupy a point in space, it is difficult to imagine a sizeless point that spins, so this idea should be taken as a conceptual picture rather than a literal one. The magnetic field that is produced when an electrical charge moves, however, is real and quantifiable. The ability of an electron to spin in either of two directions, “up” or “down”, can be used to switch logic or encode a bit of memory.
The fascinating aspect of spintronics is that the spin state of an electron can be changed without the need for energy.
Computing with Light
Using multiple laser beams with information stored in each stream of photons is another method of SIMD (Single instruction, Multiple Data) computing. The encoded data streams can then be processed by optical components using logical and arithmetic operations. By executing the identical computation on each of the 256 streams of data, a system created by Lenslet, a small Israeli startup, can process eight trillion calculations per second using 256 lasers.
Application areas for the technology include data compression for 256 video channels.
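For readers less familiar with SIMD, the sketch below mirrors the structure described above in ordinary software: one instruction applied at once to many data streams, plus a back-of-the-envelope check of the quoted throughput figure. The stream and sample counts are illustrative; nothing here models the optical hardware itself.

```python
# A minimal software illustration of the SIMD idea described above: one
# instruction applied simultaneously to many independent data streams.
# The laser/optical details are hardware-specific; this only mirrors the
# "same operation on 256 streams" structure, with numbers from the text.
import numpy as np

STREAMS = 256
SAMPLES_PER_STREAM = 1_000            # arbitrary illustration size

data = np.random.rand(STREAMS, SAMPLES_PER_STREAM)
coeff = 0.5

# a single vectorized instruction acts on all 256 streams at once
result = data * coeff + 1.0
print(result.shape)                   # (256, 1000)

# back-of-the-envelope check of the figure quoted above:
# 8 trillion ops/s spread over 256 streams ~ 31 billion ops/s per stream
per_stream_rate = 8e12 / STREAMS
print(f"{per_stream_rate:.2e} ops per second per stream")
```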
Quantum Computing
Even more revolutionary than SIMD parallel processing, quantum computing is still in its infancy compared to the other emerging technologies we have covered. A quantum computer contains a number of qubits, each of which can effectively be both 0 and 1 at the same time. The basic ambiguity that occurs in quantum mechanics serves as the basis for the qubit. The qubits in a quantum computer are represented by a quantum property of a particle, like the state of each individual electron’s spin. When the qubits are in an “entangled” state, each is simultaneously present in both states.
Each qubit’s ambiguity is resolved by a process known as “quantum decoherence”, leaving an unambiguous series of ones and zeroes. If the quantum computer is set up properly, the decohered sequence should be the solution to the problem. In essence, only the proper sequence survives the decoherence process.
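A toy sketch of the qubit idea follows, under the simplifying assumption that “decoherence” can be stood in for by a probabilistic measurement of two amplitudes; this is a conceptual illustration, not a quantum simulator.

```python
# A toy illustration of the qubit idea: a qubit is held as two amplitudes
# (for the 0 and 1 states); "measurement" (a stand-in for decoherence)
# collapses it to a definite 0 or 1 with probabilities given by the
# squared amplitudes. This is a conceptual sketch, not a quantum simulator.
import random

def measure(amp0: complex, amp1: complex) -> int:
    p0 = abs(amp0) ** 2 / (abs(amp0) ** 2 + abs(amp1) ** 2)
    return 0 if random.random() < p0 else 1

# an equal superposition: both outcomes equally likely before measurement
amp0 = amp1 = 1 / 2 ** 0.5
counts = [0, 0]
for _ in range(10_000):
    counts[measure(amp0, amp1)] += 1
print(counts)   # roughly [5000, 5000]
```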
Accelerating the availability of Human-Level Personal Computing
By 2025, we will have 10^16 cps, up from the more than 10^9 cps that personal computers currently offer. There are, however, a number of techniques to speed up this timetable. Application-specific integrated circuits (ASICs) can offer better pricing performance for extremely repetitive calculations than general-purpose processors. For the repetitive calculations required to produce moving images in video games, such circuits already offer exceptionally high computational throughput. ASICs can accelerate price performance a thousand-fold, slashing the 2025 deadline by around eight years.
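As a rough sanity check of the “thousand-fold boost is worth about eight years” claim, one can assume (as the book does elsewhere) that price performance doubles roughly every year or slightly faster; log2(1000) is about ten doublings, which lands in the same ballpark:

```python
# A rough sanity check of the "thousand-fold ASIC boost ~ eight years" claim,
# under the assumption (not stated explicitly here) that price performance
# roughly doubles every year. log2(1000) ~ 10 doublings, so a one-off 1000x
# gain buys on the order of a decade of ordinary progress; with a somewhat
# faster doubling time the figure lands near the eight years quoted above.
import math

speedup = 1_000
doublings = math.log2(speedup)             # ~ 9.97
for doubling_time_years in (0.8, 1.0):     # assumed doubling times
    saved = doublings * doubling_time_years
    print(f"{doubling_time_years} yr/doubling -> ~{saved:.1f} years saved")
```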
The many different programs that make up a simulation of the human brain will have a lot of repetition as well, making them suitable for ASIC implementation. For instance, a fundamental wiring pattern is repeated billions of times in the cerebellum.
By employing the unused computational power of Internet-connected devices, we will also be able to increase the power of personal computers. Mesh computing, one of the new communication paradigms, treats each device in the network as a node: rather than devices (such as personal computers and PDAs) merely sending information to and from centralized nodes, each device acts as a node itself, sending information to and receiving information from every other device. As a result, incredibly strong, self-organizing communication networks will be created. It will also be simpler for computers and other devices to use the spare CPU time of the mesh members in their region.
Human memory capacity
How much memory can one individual store computationally? It turns out that if we include the demands on human memory, our time predictions are rather close. For a variety of topics, an expert typically has mastered 10^5 or more “chunks” of information.
These units stand for both specific knowledge and patterns (like faces). A world-class chess player, for instance, is thought to have mastered about 100,000 different board situations. Shakespeare used 29,000 words, yet those words had around 100,000 different meanings. Humans are capable of mastering around 100,000 concepts in a given topic, according to the development of expert systems in medicine. If we assume that this “professional” knowledge makes up only 1% of a human’s overall store of patterns and knowledge, we arrive at an estimate of 10^7 chunks.
A plausible estimate for chunk sizes (patterns or pieces of knowledge) in rule-based expert systems or self-organizing pattern-recognition systems is around 10^6 bits, which equates to a functional memory capacity of 10^13 (10 trillion) bits in humans.
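Spelled out step by step, the estimate above is just a couple of multiplications; the numbers below are the ones quoted in the text.

```python
# The functional-memory estimate above, spelled out step by step.
expert_chunks = 10**5        # chunks mastered by an expert in one domain
fraction_of_total = 0.01     # assume expert knowledge is ~1% of everything stored
total_chunks = expert_chunks / fraction_of_total     # 10**7 chunks
bits_per_chunk = 10**6       # plausible size of one pattern or piece of knowledge
total_bits = total_chunks * bits_per_chunk           # 10**13 bits
print(f"{total_bits:.0e} bits of functional human memory")   # 1e+13
```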
Machines achieving the computational capacity of the human brain could lead, for example, to more advanced problem-solving and data-analysis capabilities. In addition, we could have computers with richer memory and better learning capabilities. Scientific research and technological development would surely be accelerated in every field.
However, there are also downsides: machines smarter than humans could easily deceive and manipulate people, as today’s AIs already can, and they could show unpredictable consequences and behaviors, as happens in human brains. There could also be socio-economic impacts, since these machines will outperform humans in many tasks, as well as an environmental impact because of their substantial computational and energy consumption.
The Singularity is near, by Ray Kurzweil, is available to purchase here [...]
June 13, 2023It will help to test the human body in high-temperature environments
For researchers looking into how the human body copes with severe temperatures, it’s frequently not feasible to conduct studies on willing volunteers, considering that an ordinary person could die as a result of several of these tests.
This is where ANDI comes in. It is the first robot in history to breathe, sweat, shiver, and walk. Thermetrics, a thermal technology company, constructed the indoor-outdoor machine especially for Arizona State University (ASU) to help researchers learn more about how the body reacts to high heat. According to this article, the android can mimic the thermal functions of the human body thanks to its 35 different surface areas, each of which is equipped with temperature sensors, heat flux sensors, and the crucial element of this invention: sweat pores.
“ANDI sweats; he generates heat, shivers, walks, and breathes. There’s a lot of great work out there for extreme heat, but there’s also a lot missing”, remarked Konrad Rykaczewski, Associate Professor in the School for Engineering of Matter, Transport, and Energy.
It goes without saying that every state in the US will likely see higher average temperatures and more intense heat waves in the coming decades. Sadly, this may increase the number of people who die every year from heat-related ailments: 425 heat-related fatalities were reported in Maricopa County alone in 2022, a frightening 25% rise from the previous year. And given that Canadian wildfires have blanketed large portions of the country in orange smoke, it may not be long before such catastrophes become more frequent.
Scientists put ANDI in a heat chamber to carry out studies measuring the health hazards posed by different settings, so that they can build solutions to better protect humans from heat stress.
According to Associate Professor Jenni Vanos from the School of Sustainability and Assistant Professor Ariane Middel from the School of Arts, Media, and Engineering, “You can’t put humans in dangerous extreme heat situations and test what would happen”. If you’re wondering how ANDI itself avoids overheating, it’s because the robot has internal cooling channels that circulate cold water throughout its body, keeping it cool even in the warmest conditions.
Researchers intend to combine ANDI with MaRTy, a bio-meteorological heat robot, to advance the research. The two will then work together to show how human sweating processes affect core and skin temperatures, and how bodies behave in circumstances where there is a risk of extreme heat. It’s fascinating to note that with these robots, researchers can easily alter the androids’ BMI models, age characteristics, and medical conditions to observe how different bodies respond to various situations. Perhaps this research will lead to the development of heat-intervention devices, such as cooling clothing or exoskeletons that support cooling backpacks.
Ever more sophisticated robots, such as Spot by Boston Dynamics, will make it easier to test extreme environments without risking human life. Still, there could be a downside if their ability to withstand situations that a human could not were turned against people. [...]
June 6, 2023AI can alter the music scene as mp3 did years ago
As explained here, producers could employ AI to change their vocals into the sound of another artist’s voice, which could be yet another giant step forward for AI-powered music production. Entrepreneur and tech influencer Roberto Nickson shared a video on Twitter in which he used an AI-generated Kanye West voice in place of his own to record eight lines over a track that he found on YouTube.
There will be a lot of regulatory and legal frameworks that will have to be re-written around this. We will have to figure out how to protect artists at the machine level. For example, Kanye has the right to protect his name, image, likeness, etc. – that might have to be… — Roberto Nickson (@rpnickson) March 26, 2023
The outcomes are remarkably realistic. There are one or two words that sound slightly off early in the song, but the majority of the verse sounds extremely accurate and could easily persuade the average listener of its authenticity. But it’s important to note that Kanye’s words and delivery are better, and AI can’t quite replicate those two things yet.
Nickson also employed the technology to produce other versions of well-known songs, putting AI Kanye on the vocals for versions of Justin Bieber’s Love Yourself, Frank Ocean’s Nights, and Dr. Dre’s Still D.R.E.
Nickson followed a YouTube tutorial on how to use Google Colab to access an existing AI model that has been trained on Kanye’s voice in order to mimic the vocal timbre of the rapper. The music industry will certainly experience significant changes when this kind of technology is streamlined and integrated into the DAW.
“All you have to do is record reference vocals and replace it with a trained model of any musician you like”, Nickson says. “Keep in mind, this is the worst that AI will ever be. In just a few years, every popular musician will have multiple trained models of them”.
Although technically fascinating, it’s unclear what the legal implications of this form of style transfer are. The right of publicity, which is protected in a number of nations and is described as “the rights for an individual to control the commercial use of their identity” is likely to forbid artists from employing AI clones of another artist’s voice in commercially produced music without permission.
Rick Astley sued rapper Yung Gravy a few months ago, alleging that Gravy violated his right of publicity by imitating his vocal delivery in the song Betty (Get Money). The lawsuit cites an instance from 1988 in which Ford Motor Company was successfully sued for using an impersonator to sound like Bette Midler in an advertisement.
Nickson correctly notes in the responses to his Twitter thread that many regulatory and legal systems would need to be revised in order to accommodate this and that we must choose how to safeguard artists.
As this technology is integrated into the DAW, we can picture a time when musicians sell their own voice models to fans who want to employ them in AI-powered plugins to replicate the voice in their own tracks. A rapper or vocalist may appear on a thousand tracks in a day without ever going to a studio or saying a word. This might be a new form of commerce or perhaps a way to work remotely.
As a Twitter user commented, this could be a moment of significance comparable to the rise of sampling in hip-hop. “Music was thought to be singing and playing instruments until technology allowed you to make music out of other existing music”, he continues. “That’s now happening again, but on an atomic scale. It’s about to be god mode activated for everyone”.
I'm glad the Kanye AI video is sparking conversation. Whether you liked it or hated it – the point is that AI represents profound societal transformation. The possibilities are endless, but so are the dangers. AGI will likely happen in our lifetime. Some of the world's… — Roberto Nickson (@rpnickson) March 26, 2023
As with all AI developments, we face both the possibility of creativity and the possibility of abuse. Once the technology is powerful enough to be completely convincing and available, the market can be so flooded with fake AI voices that it’s impossible to tell what’s real and what’s fake.
“Things are going to move very fast over the next few years”, Nickson comments. “You’re going to be listening to songs by your favorite artists that are completely indistinguishable, you’re not going to know whether it’s them or not”.
“The possibilities are endless, but so are the dangers,” Nickson continues in a separate tweet. “These conversations need to be happening at every level of society, to ensure that this technology is deployed ethically and safely, to benefit all of humanity”.
In a TED talk shared last year, musician, producer, and academic Holly Herndon had another artist sing through an AI model trained in her own voice in real-time.
A recent case of AI used in music that went viral is the Oasis album made with AI.
The eight-song album, titled ‘The Lost Tapes Volume I’, is the brainchild of Hastings indie band Breezer. Tired of waiting for the iconic Brit-pop group to reform, Breezer decided to create their own 30-minute Oasis-style album in the vein of the band’s 1995-1997 heyday, crediting it to AIsis. The lyrics and music were written and recorded by Breezer, but Liam Gallagher’s vocals were all created using artificial intelligence.
AI is once again going to change the way we produce art, this time in music. Many producers worried when their music started spreading across the internet after the mp3 was born; now they will be even more worried, since it’s their voice that can be stolen. Soon we’ll see many unauthorized tracks sung by famous artists without their consent. Although it can be amazing to listen to new songs by popular artists, especially those who have died, once AI produces perfect results it could become hard to distinguish real artists from synthetic ones, or licensed tracks from unlicensed ones. On the other hand, holograms and AI voices could let an artist sing forever, and new virtual instruments (VSTs) could legitimately grant everyone the right to produce with famous artists’ voices. [...]
May 30, 2023Robots and AI can lead to more devastating wars
Lethal autonomous weapons (LAWs), often known as killer robots or slaughterbots, are probably familiar to you from movies and books, and the concept of rogue super-intelligent weaponry is still the stuff of science fiction. But as AI weapons get more advanced, public worries about a lack of accountability and the possibility of technical failure are growing.
We are not new to AI mistakes that can cause harm. In a war, however, these kinds of mistakes could result in the deaths of civilians or ruin negotiations.
According to this article, a target recognition algorithm, for instance, may be trained to recognize tanks from satellite images. But what if every illustration used to train the system showed soldiers standing in a circle around the tank? It could believe a civilian car navigating a military barrier is a target.
Civilians have suffered in numerous nations (including Vietnam, Afghanistan, and Yemen) as a result of how the world’s superpowers manufacture and use ever-more-advanced weapons.
Those who believe that a nation must be able to defend itself by keeping up with other countries’ military technology belong to the other camp. For example, Microsoft asserts that its speech recognition technology has a 1% error rate compared to a 6% error rate for humans. It should therefore come as no surprise that armies are gradually handing the reins over to algorithms.
But how do we keep killer robots from joining the lengthy list of inventions we regret?
The US Department of Defense defines an autonomous weapon system as “a weapon system that, once activated, can select and engage targets without further intervention by a human operator”.
This standard is already met by several fighting systems. Algorithms on the computers in current missiles and drones can recognize targets and shoot at them with far greater accuracy than a human operator. One of the active defense systems that can engage targets without human supervision is Israel’s Iron Dome.
Despite being intended for missile defense, the Iron Dome may accidentally cause fatalities. But because of the Iron Dome’s typically consistent track record of defending civilian lives, the risk is accepted in international politics.
Robot sentinels and the loitering kamikaze drones employed in the conflict in Ukraine are only two examples of AI-enabled weapons that are made to harm people. So, if we hope to influence the use of LAWs, we need to understand the history of modern weapons.
International agreements, like the Geneva Conventions, set standards for how civilians and prisoners of war should be treated during hostilities. They are one of the few methods we have to manage the conduct of conflicts. Regrettably, the US’s use of chemical weapons in Vietnam and Russia’s use in Afghanistan provide evidence that these strategies aren’t always effective.
Worse is when important players decline to participate. Since 1992, the International Campaign to Ban Landmines (ICBL) has pushed for the outlawing of mines and cluster munitions (which randomly scatter small bombs over a wide area). A ban on these weapons was included in the Ottawa Convention of 1997, which 122 nations ratified. Yet neither the US nor China nor Russia agreed.
What about more sophisticated AI-powered weaponry, though? Nine major issues with LAWs are listed by the Campaign to Stop Killer Robots, with a focus on the lack of accountability and the resulting inherent dehumanization of killing.
Although this critique is legitimate, a complete prohibition of LAWs is implausible for two reasons. First, like mines, they have already been legitimized. Second, the line between autonomous weapons, LAWs, and killer robots is blurry and hard to draw. Military leaders could always find a way around a ban’s restrictions and smuggle killer robots into use as defensive autonomous weapons. They might even do so inadvertently.
Future weapons are almost certainly going to include more AI capabilities. But this does not mean we should turn a blind eye. More precise and detailed restrictions would make it easier to hold our politicians, data scientists, and engineers accountable.
For instance, by banning:
Black-box AI: systems where the user knows only the algorithm’s inputs and outputs.
Unreliable AI: inadequately tested systems.
Relying on robots and artificial intelligence to conduct warfare means becoming even freer from responsibility for criminal acts. Delegating killing to machines could thus lead to bloodier killings with less morality, giving rise to a whole new warfare scenario. And since machines are tireless, what will cause a war to end? [...]
May 23, 2023We’ll have more time and a longer life
Will the metaverse and chatbots take over, or will there still be a place for some romance in real life? According to The Sun, author and futurologist Tom Cheesewright says the future of relationships and of our time will be handled with fewer restrictions.
Dating holidays for love seekers
After a divorce, life will be drastically different from how it is today. During purpose-built dating getaways, singles seeking a committed new relationship will have the opportunity to meet others who share their goals.
While we’ll spend a lot of time in virtual environments like the metaverse, we’ll probably look for new, committed relationships in the real world. Dating holidays for singles seeking marriage will grow popular.
Yet those who aren’t looking for a romantic relationship will be able to communicate with highly developed domestic robots without the need for a partner at all.
Robot sex will also become more popular, and more sophisticated and socially acceptable sex devices will become available, especially for individuals who have just divorced and aren’t ready to commit to another loving relationship.
And the distinction between sex toys and sex robots will become less clear. Nonetheless, we will always be a little wary of robots that resemble people.
IRL replacing right swiping
Swiping right, as we do on dating apps like Tinder, will be replaced with traditional dinner parties for individuals who do desire a casual date. Over the past 12 years, online dating has gone from being a niche option to the most common way of establishing connections.
Even cutting-edge technology, though, is a million miles from the human touch. Therefore, there may be a reaction against technology and a return of genuine human connections, which we might value more.
In 2123, the metaverse, with its many other reality planes, will rule. Using your smart glasses or contact lenses, you can experience virtual reality to some extent, but you can also see virtual objects in the real world.
We’ll spend the majority of each day interacting with AIs, robots, and people in the metaverse. The genuine thing will always be superior to the metaverse. We will value physical contact, human love, and empathy more the longer we stay there.
When smart glasses become the norm in 2123 and we all wear them all the time, it will be polite to take them off and concentrate solely on the person you are speaking to, much like how you would put your phone away in a public setting. We’ll treasure those times when we feel completely connected to another person more.
Divorce by scanning thumbs
There are now a lot of virtual assets in relationships, and that number will only rise over the course of the next century. These assets range from shared online music collections and digital bank accounts to mortgages and NFTs (non-fungible tokens, or digital assets).
People will then find it much simpler to enter into and leave relationships free of baggage as a result. It will be much simpler to divide assets like a home and furniture if a couple divorces in the next century.
All of our belongings will be fully inventoried, and their value will be constantly updated. Also, all marriage-related documents will be digital. The signature on a marriage license may be replaced with a fingerprint scan or even a reading of your individual heartbeat.
And if a couple is divorcing amicably, they could just leave their handprint in front of a reader in the metaverse to signal the end of the relationship. The only time the courts will get involved in separation is if it is very messy.
The emotions associated with divorce won’t go away, but with new therapies and a better understanding of the human mind, we will be able to better manage the processes.
Multiple marriages
Due to the fact that more people would desire several partners throughout their lifetimes, marriage will also look very different in 100 years. Traditional marriage won’t disappear, but since the average age of marriage is already rising, we’ll definitely date more before we settle down.
We won’t be averse to having a number of significant connections and communicating it with our partners.
Multiple marriages could be the norm, much as we may have more than one job over a lifetime. Humans will live longer, and it won’t be unusual for them to have three or four marriages. Only time will tell whether those will be with robots or with genuine humans.
People will delay having kids
People will delay getting married and having kids because a larger percentage of them will live longer healthier lives. The age at which couples decide to have children has already undergone a significant change.
A woman having a kid in her fifties, or even later, won’t be rare thanks to medical advancements. Choosing not to have children will also become more prevalent as more individuals lead satisfying lives in virtual environments.
More people will vanish into the metaverse once the digital world becomes alluring enough to satisfy any fantasy, not just sexual ones. It will be especially attractive to people who prefer to avoid interpersonal contact.
Bots will free up spare time
The ordinary tasks of daily living, like paying the bills, buying groceries, and maintaining a vehicle, will be long gone for humans. Couples in 100 years will have more time and energy for each other, as well as sex because there will be more free time and less stress on interpersonal connections. By that time, working from home will be typical, offering couples a lot more opportunities for spontaneity.
If we imagine a future made up exclusively of digital relationships, we are wrong. The search for real relationships will not disappear, although such relationships might become something more exclusive in a world where most people are satisfied with virtual ones. Thanks to technology, there will probably be more opportunities to find people who share our interests, something that tends to be harder in real life, although we may lose all those aspects, both positive and negative, that the randomness of purely real life entails.
However, being condemned to solitude may become less likely, because there will be surrogates, albeit completely digital ones. [...]
May 16, 2023AI could use your personal data to influence your decisions
Generative AI refers to a category of artificial intelligence algorithms that generate new outputs based on the data they have been trained on. Unlike traditional AI systems designed to recognize patterns and make predictions, generative AI creates new content in the form of images, text, audio, and more.
Before getting into details about the risks of these kinds of AI, here are some warnings raised around them.
Jobs: Now, generative AI is capable of producing human-level outputs like scientific reports, essays, and artwork. Therefore, it could totally change the work landscape.
Fake content: Generative AI is currently capable of producing content of human quality on a large scale, such as false and deceptive articles, essays, papers, and films. Although the issue of misinformation is not new, generative AI will make it possible to produce it in unprecedented quantities. It’s a big risk, but fake content can be detected by (a) requiring watermarking technologies that identify AI content at the time of generation, or (b) by implementing AI-based countermeasures that are trained to recognize AI content after the fact.
Sentient machines: Several researchers are concerned that AI systems will eventually reach a point where they have a “will of their own,” act in ways that are inimical to human interests, and even pose a threat to human existence. This is a real long-term risk; the book Arrival Mind, a “picture book for adults”, describes it. But without significant structural advancements in technology, contemporary AI systems won’t spontaneously develop sentience. Hence, even if the industry should pay attention to this risk, it’s not the most pressing concern at this time.
Most safety experts, according to this article, as well as politicians, err when they assume that generative AI is primarily used to produce traditional content at scale. The more crucial concern is that generative AI will unleash a completely new form of media that is highly personalized, fully interactive, and potentially much more manipulative than any form of targeted content we have faced to date.
The most dangerous aspect of generative AI is not its ability to mass produce fake news and videos, but rather its ability to generate adaptable, interactive material that is tailored to the needs of each user to have the greatest possible persuasive effect. In this context, targeted promotional content that is generated or changed in real-time to maximize influence goals based on personal information about the receiving user is referred to as interactive generative media.
As a result, “targeted influence campaigns” will change from broad demographic groups to single individuals being targeted for maximum impact. The two powerful flavors of this new form of media, “targeted generative advertising” and “targeted conversational influence”, are discussed below.
The use of images, videos, and other informative content that has the appearance and feel of traditional advertisements but is customized in real-time for specific consumers is known as targeted generative advertising. Based on influencing objectives supplied by third-party sponsors and personal information accessed for the particular user being targeted, these adverts will be generated on the fly by generative AI systems. The user’s age, gender, and level of education, as well as their interests, values, aesthetic preferences, buying patterns, political beliefs, and cultural prejudices, may be included in the personal information.
The generative AI will adjust the layout, feature images, and promotional text to enhance efficacy on that user in response to the influence objectives and targeting information. The age, race, and clothing choices of any people depicted in the images, as well as every other detail, are all customizable, right down to the colors, fonts, and punctuation. To enhance the subtle impact on you specifically, generative AI could change every aspect in real-time.
Also, since technological platforms can monitor user interaction, the system will gradually learn which strategies are most effective for you, identifying the hair colors and facial expressions that catch your interest the most.
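The loop described above (influence objectives plus personal data, a generated ad, an engagement signal, an updated strategy) can be sketched abstractly as follows; every function and field name here is hypothetical and merely stands in for whatever a real platform would use.

```python
# A deliberately abstract sketch of the personalization loop described above.
# All names are hypothetical; the point is only the structure:
# objectives + personal data -> generated ad -> engagement signal -> updated
# strategy, repeated per user.
from dataclasses import dataclass, field
import random

@dataclass
class UserProfile:
    age: int
    interests: list
    # running scores for which presentation "strategies" seem to work
    strategy_scores: dict = field(
        default_factory=lambda: {"rational": 0.0, "emotional": 0.0})

def generate_ad(objective: str, profile: UserProfile, strategy: str) -> str:
    # stand-in for a call to a generative model; a real system would synthesize
    # layout, imagery, and copy conditioned on the profile
    return f"[{strategy}] ad for '{objective}' aimed at interests {profile.interests}"

def observe_engagement(ad: str) -> float:
    # stand-in for click / dwell-time telemetry
    return random.random()

def serve(objective: str, profile: UserProfile, rounds: int = 5) -> None:
    for _ in range(rounds):
        # mostly reuse whichever strategy has worked best so far, sometimes explore
        if random.random() < 0.3:
            strategy = random.choice(list(profile.strategy_scores))
        else:
            strategy = max(profile.strategy_scores, key=profile.strategy_scores.get)
        ad = generate_ad(objective, profile, strategy)
        profile.strategy_scores[strategy] += observe_engagement(ad)

serve("hypothetical product", UserProfile(age=30, interests=["cycling"]))
```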
If you think this sounds like science fiction, consider this: recently, plans to apply generative AI to the generation of internet advertisements were made public by both Meta and Google. If these strategies generate more clicks for sponsors, they will become common practice, and an arms race to deploy generative AI to optimize promotional content will ensue, with all major platforms striving to do so.
This brings us to the concept of targeted conversational influence, a generative technique in which influence goals are communicated through conversational interaction rather than formal written or visual media.
The conversations will take place via voice-based systems or chatbots (like ChatGPT and Bard) that are powered by similar large language models (LLMs). Since third-party developers will incorporate LLMs into their websites, apps, and interactive digital assistants through APIs, users will frequently come into contact with these “conversational agents” throughout a typical day.
The risk of conversational influence will significantly increase when conversational computing becomes more prevalent in our daily lives because paying sponsors may insert messages into the conversation that we might not even be aware of. Similar to targeted generative ads, the messaging objectives desired by sponsors will be combined with personal user data to maximize impact.
The user’s age, gender, education level, personal interests, hobbies, values, etc. might all be included in the data to enable real-time generative dialog that is tailored to best appeal to that particular person.
You probably already know that the most effective method to convince a consumer to buy something is not to hand them a brochure but to engage them in face-to-face conversation so you can sell them on the product, hear their concerns, and alter your arguments as necessary. An ongoing cycle of pitching and adjusting can persuade someone to buy something.
Before, only humans were capable of doing these tasks; however, generative AI can now do so with higher competence and access to a wider range of knowledge.
These AI agents will be digital chameleons that can adopt any speech style, from nerdy or folksy to suave or hip, and can pursue any sales approach, from befriending the customer to exploiting their fear of missing out, in contrast to human salespeople, who have only one persona. And since these AI agents will have access to personal information, they may mention the right musicians or sports teams to get a friendly conversation going.
Technological platforms could also keep track of how persuasive previous exchanges were with you to figure out what strategies work best on you. Do you respond better to rational arguments or emotional ones? Do you go for the best value or the best product? Are time-sensitive discounts or free extras more persuasive to you? Platforms will get adept at pulling all of your strings.
The real risk is that propaganda and misinformation will be spread using the same techniques, tricking you into adopting extreme views or erroneous ideas that you might otherwise reject. Since AI agents would have access to a wealth of information on the internet, they may cherry-pick evidence in a way that would defy even the most experienced human.
As a result, there is an imbalance of power that has come to be known as the “AI manipulation problem“, in which talking to artificial agents who are very good at appealing to us puts us, humans, at a severe disadvantage because we are unable to “read” their genuine intents.
Targeted conversational influence and targeted generative ads will be powerful persuasion techniques if they are not regulated. Users will be outmatched by an opaque digital chameleon that has access to vast amounts of information to support its arguments while giving off no indication of how it thinks.
For these reasons, regulators and business leaders must treat generative AI as a brand-new medium that is interactive, adaptive, personalized, and scalable. Without substantial protections in place, consumers may be subject to predatory tactics that range from subtle coercion to overt manipulation.
Manipulation will be the next problem humanity has to face: rather than being forced into something they refuse to do, people will unconsciously end up doing things they don’t actually want to do. [...]
May 9, 2023Will they soon take over?
The best-known humanoid to date is Atlas, by Boston Dynamics but we are not sure how much we love it now that it’s becoming a reality.
Anyway, as reported by Fox News, with its most recent effort, OpenAI, the cutting-edge artificial intelligence research organization that created the all-powerful and wildly successful ChatGPT, has achieved a disturbing breakthrough. The tech giant has teamed up with a robotics startup called Figure to create an exceedingly spooky robot that may eventually perform all of your tasks for you.
Consider a scenario in which robots can be taught difficult tasks simply by watching people. Such technology is exactly what OpenAI has included in this robot. Its neural network raises the bar for robotics by learning to recognize and imitate human movement.
The ability of the OpenAI/Figure robot to examine data from motion capture devices is what gives it its uniqueness. These technologies capture human motions and turn them into digital data. The bot’s “brain” then applies this information to teach the same activities to itself. The robots appear to be learning from watching us.
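Neither OpenAI nor Figure has published implementation details, so the following is only a generic sketch of the learning-from-demonstration idea (behavior cloning on motion-capture data), with made-up dimensions and random arrays standing in for real recordings.

```python
# A generic illustration of learning from demonstration ("behavior cloning"),
# not OpenAI's or Figure's actual method, which has not been published.
# Motion-capture data is treated as (pose -> next pose) pairs and a small
# network is fit to imitate the recorded motion. All shapes are made up.
import numpy as np
import torch
import torch.nn as nn

POSE_DIM = 24                       # hypothetical number of joint angles
demos = np.random.rand(1000, POSE_DIM).astype(np.float32)   # stand-in mocap

inputs = torch.from_numpy(demos[:-1])      # current pose
targets = torch.from_numpy(demos[1:])      # pose recorded one frame later

policy = nn.Sequential(nn.Linear(POSE_DIM, 128), nn.ReLU(), nn.Linear(128, POSE_DIM))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(200):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(policy(inputs), targets)   # imitate the demos
    loss.backward()
    optimizer.step()

# after training, the policy proposes the next pose given the current one
next_pose = policy(torch.from_numpy(demos[-1:]))
```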
Manufacturing, construction, and even health care may all be revolutionized by this ground-breaking technology. Consider how human error would be reduced if robots helped with construction of skyscrapers or assisted in surgery.
This interesting development has a negative aspect as well. There is a nagging worry about how it might affect job security for many people as these robots become more sophisticated and capable of doing things that humans used to do. It makes sense to be concerned about being replaced if a robot can do your work more quickly, more accurately, and without requiring a lunch break.
Nevertheless, it goes beyond employment. The ethical aspect of things should also be taken into account. We must ask ourselves some challenging questions as AI and robotic technologies become smarter and more independent. What responsibilities do we have to these artificial intelligence systems? How can we guarantee that they are created, used, and employed safely and ethically? What happens if they develop too much intelligence for our or their own good?
While we create and adopt AI technology, it is crucial to take into account the potential implications. We do, after all, want to make sure that these developments ultimately help people.
Beyond the ethical factors, AI and robots are the main concern for those who are afraid of losing their jobs. However, the major problem is not facing this change but ignoring that it is happening. Many think there will be new jobs, and that’s true, but few point out that these new jobs will require specialized people and will be fewer than the jobs lost. Therefore, we need to start imagining a new society, with a new redistribution of income and a new way of conceiving the role of work in our lives. [...]
May 2, 2023AI will create a new kind of war
Technology has advanced at an incredible rate nowadays, and the field of war is no exception. We hear every day of new technology that will change warfare for the better, from hypersonic missiles and nanotechnology to space warfare using satellite-based lasers and biologically enhanced soldiers or robots.
We are captivated by how technology will offer us an advantage in conflict, particularly in the West, perhaps because it always has: the West and its predecessors have almost always brought higher levels of technology to combat, whether hydrogen bombs or drone swarms. And for at least 500 years, this has made it possible for the West to ultimately prevail in most conflicts.
However, there are certain realities of war that you cannot wish away. For example, even the most advanced technology cannot make up for a lack of a strategy, low troop morale, or broken logistics.
According to this article, warfare will always be a psychological phenomenon and a phenomenon concerning people. A war is won when one side decides it has had enough. Wars will still be won through better strategy. The same dynamics of advancing, retreating, lying to your enemies, and inspiring terror in them will still be part of that strategy. Rather than being a technological activity, war will continue to be primarily a human, emotional activity.
Technology won’t alter how war works but there could be a scientific advancement that will completely alter combat in the near future. I’m talking about AI.
Therefore, there won’t be any more decision-making by people; machine brains will make the decisions. While human brains usually win wars by employing techniques we are accustomed to (bluff, advance, entrap, and deceive), AI might not do so.
These techniques are the result of brains that developed in a particular evolutionary setting, competing with other humans in social situations. We all experience the same kinds of emotions (more or less). We are all jealous, angry, arrogant, prideful, etc. The foundation of the strategy is these feelings and these techniques of mentally connecting with others.
We don’t know much about the future AI systems that will be managing wars, other than the fact that they will undoubtedly exist in this fiercely competitive human arena and that they will not resemble our human brains in any way. AI systems will have new ways of thinking. War’s fundamental psychology will vanish. The psychology that drives strategy is created by human brains that have evolved through human rivalry, which also creates the core of the conflict. However, these human brains have evolved to maximize survival and procreation, including the ability to form alliances, find food, water, and sexual partners, as well as to evade attacks.
Artificially intelligent war brains won’t aim to accomplish any of these things. Since AI’s sole objective will be to win the war (assuming it was programmed that way), the strategy it employs will take a completely different form. Because of this, applying AI to the battlefield could alter the nature of combat in ways that are unimaginable to humans. Possibly the most significant technological advancement in the history of conflict is artificial intelligence.
So, for the first time in the history of human civilization, the essence of warfare may change. Although AI making strategic decisions is currently (2023) unlikely to happen for a while, lower levels of automation are already being implemented.
Militaries are already developing and testing autonomous weapons systems, such as drones and loitering missiles (the US, for example, spends roughly $2 billion per year on this research) because, in comparison to humans, autonomous systems are much, much faster at making decisions, don’t get tired and need to sleep, and don’t end up as casualty statistics.
Autonomous systems will start to engage in conflict in a few years (or possibly even now, but that is yet unknown to us). And the dynamics that emerge from that conflict will show how war will develop in the future.
Artificial intelligence will remove all human weaknesses from the warfare scenario, limiting casualties where conflicts occur only between machines; but where humans are involved, there will be little chance of escape. Fatigue, distraction, and human needs are certainly weaknesses during a conflict, but they are also elements that can be exploited to defeat the enemy and accomplish a mission. In a conflict organized by AI, their absence could therefore prolong wars or make them even more lethal. [...]
April 25, 2023They can learn and remember
Researchers from the University of Sydney and other institutions have shown that nanowire networks can work like the human brain’s short- and long-term memory.
Nanowires are nanostructures in the shape of wires, with lengths ranging from a few micrometers to centimeters and widths on the order of a nanometer (10^-9 meters): their lengths are unconstrained, while their thickness or diameter is limited to tens of nanometers or less. Metals, semiconductors, and oxides are just a few of the many materials that can be used to create nanowires. They are useful for a variety of applications, including sensors, transistors, solar cells, and batteries, because of their distinctive electrical, optical, and mechanical features.
According to this article, the study, carried out with Japanese collaborators, was published in the journal Science Advances under the direction of Dr. Alon Loeffler, who received his Ph.D. in the School of Physics.
“In this research, we found higher-order cognitive function, which we normally associate with the human brain, can be emulated in non-biological hardware”, Dr. Loeffler said.
“This work builds on our previous research in which we showed how nanotechnology could be used to build a brain-inspired electrical device with neural network-like circuitry and synapse-like signaling”.
“Our current work paves the way towards replicating brain-like learning and memory in non-biological hardware systems and suggests that the underlying nature of brain-like intelligence may be physical”.
Invisible to the unaided eye, nanowire networks are a type of nanotechnology that are often built from small, highly conductive silver wires that are dispersed across one another like a mesh. Aspects of the networked physical structure of the human brain are modeled by the wires.
Many practical applications, such as improving robots or sensing systems that must make quick decisions in unexpected surroundings, could be ushered in by advancements in nanowire networks.
“This nanowire network is like a synthetic neural network because the nanowires act like neurons, and the places where they connect with each other are analogous to synapses”, senior author Professor Zdenka Kuncic, from the School of Physics, said.
“Instead of implementing some kind of machine learning task, in this study, Dr. Loeffler has actually taken it one step further and tried to demonstrate that nanowire networks exhibit some kind of cognitive function”.
The N-Back task, a common memory test used in studies on human psychology, was employed by the researchers to examine the capabilities of the nanowire network.
The n-back task, developed by Wayne Kirchner in 1958, is a continuous performance task frequently employed in psychological and cognitive neuroscience assessments to evaluate a part of working memory and working memory capacity. People must determine whether each item in a sequence of letters or images shown as part of the task matches an item that was presented n items earlier.
A person who can recognize an image that appeared seven steps back gets an N-Back score of 7, which is the average score for people. The researchers discovered that the nanowire network could “remember” a desired endpoint in an electric circuit seven steps back, matching the average human score.
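As an illustration of the task itself (the actual study’s stimuli and scoring details may differ), a minimal n-back scorer looks like this:

```python
# A minimal illustration of the N-back task described above: for each item
# in a sequence, decide whether it matches the item presented n steps earlier,
# and score the responses. Details of the actual study's stimuli may differ.
import random

def n_back_targets(sequence, n):
    """Return the list of True/False answers: does item i match item i-n?"""
    return [i >= n and sequence[i] == sequence[i - n] for i in range(len(sequence))]

def score(responses, targets):
    """Fraction of positions where the subject's yes/no response was correct."""
    return sum(r == t for r, t in zip(responses, targets)) / len(targets)

letters = [random.choice("ABCD") for _ in range(20)]   # stimulus sequence
answers = n_back_targets(letters, n=7)                 # 7-back, as in the text

# a "perfect" subject answers every position correctly
print(score(answers, answers))   # 1.0
```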
“What we did here is manipulate the voltages of the end electrodes to force the pathways to change, rather than letting the network just do its own thing. We forced the pathways to go where we wanted them to go”, Dr. Loeffler said.
“When we implement that, its memory had much higher accuracy and didn’t really decrease over time, suggesting that we’ve found a way to strengthen the pathways to push them towards where we want them, and then the network remembers it”.
“Neuroscientists think this is how the brain works, certain synaptic connections strengthen while others weaken, and that’s thought to be how we preferentially remember some things, how we learn, and so on.”
According to the researchers, the nanowire network can accumulate information in memory to the point where it no longer requires reinforcement because it has been consolidated.
“It’s kind of like the difference between long-term memory and short-term memory in our brains”, Professor Kuncic said.
Long-term memories last for years. We also have a working memory, which lets us keep something in our minds for a limited time by repeating it. Short-term memory is used when, for instance, the name of a new acquaintance, a statistic, or some other detail is consciously processed and retained for at least a short period of time. Short-term memories last seconds to hours.
If we want to remember the information for a long time, we must continually train our brains to consolidate it; otherwise, it gradually fades away.
“One task showed that the nanowire network can store up to seven items in memory at substantially higher than chance levels without reinforcement training and near-perfect accuracy with reinforcement training”.
Therefore, artificial intelligence will soon also have hardware suited to supporting software-based neural networks. This implies further development in the field of robotics and in the creation of real artificial brains able to simulate the human one, albeit with due limitations. [...]
April 18, 2023Dead people and living people will live more than before
According to a computer expert, consciousness could be uploaded onto a computer, which means anyone could do it with their elderly parents and other loved ones.
According to the Daily Mail, Dr. Pratik Desai, who has launched numerous Silicon Valley AI startups, says there is a "100% chance" that family members could "live with you forever" if people keep enough video and voice recordings of their loved ones.
Desai, who developed his own ChatGPT-like technology, stated on Twitter that this ought to be feasible.
Several scientists think a new golden age of technology is about to begin as a result of the tremendous developments in AI being led by ChatGPT. The greatest brains in the world disagree on the technology, with Elon Musk and more than 1,000 other tech pioneers urging caution and warning that it may wipe out mankind. On the other hand, there are other experts, including Bill Gates, who think AI will enhance our quality of life. It also appears that other experts agree that AI will enable us to live forever.
A computer expert predicts that by the end of this year, it will be feasible to build digital humans who will live on after they pass away. Desai agrees with Gates that we can recreate our deceased loved ones as living computer avatars.
The procedure entails digitizing the person’s videos, voice recordings, documents, and images, which are then supplied to an AI system to help it understand as much as it can about the person.
Then, users can create an avatar that precisely resembles their living relative in appearance and behavior. In fact, the rise of ChatGPT has boosted a company that creates virtual people: the Live Forever project has built a virtual reality replica of a human with the same speech patterns and demeanor as the subject.
The Live Forever founder, Artur Sychov, estimated that the technology would be available in five years, but he now believes that it will only take a short while thanks to recent developments in AI.
‘We can take this data and apply AI to it and recreate you as an avatar on your land parcel or inside your NFT world, and people will be able to come and talk to you’, Sychov told Motherboard.
‘You will meet the person. And you would maybe for the first 10 minutes while talking to that person, you would not know that it’s actually AI. That’s the goal.’
Another AI company, DeepBrain AI, has built a memorial hall where people can have an immersive encounter with their deceased loved ones. The Rememory service makes use of pictures, videos, and a seven-hour interview with the subject while they are still alive.
The 400-inch screen that houses the AI-powered virtual human uses deep learning technology to recreate the person’s voice and appearance. A woman and her seven-year-old daughter, who passed away in 2016, reunited in 2020 with the use of virtual reality on a Korean television program.
The tragedy of a family’s loss of their seven-year-old daughter Nayeon was told in the show “Meeting You”. The young girl told her mother that she was no longer in pain as they were able to touch, play, and converse. Nayeon’s mother Jang Ji-sung donned the Vive virtual reality headset and was whisked away into a garden where her daughter was grinning while wearing a vibrant purple dress.
‘Oh my pretty, I have missed you,’ the mother can be heard saying as she strokes the digital replica of her daughter.
Desai gave little information about his proposed technology, but former Google engineer Ray Kurzweil is also developing a digital afterlife for people, with the intention of bringing back his father from the dead. The 75-year-old Kurzweil claimed his father passed away when he was 22 years old and he hopes to one day communicate with him using technology.
‘I will be able to talk to this re-creation,’ he told BBC in 2012. ‘Ultimately, it will be so realistic it will be like talking to my father’.
Kurzweil revealed that he is digitizing hundreds of boxes that include his father’s recordings, documents, movies, and photos.
‘A very good way to express all of this documentation would be to create an avatar that an AI would create that would be as much like my father as possible, given the information we have about him, including possibly his DNA’, Kurzweil said.
The scientist went on to say that his digitized father will go through a Turing Test, which measures a machine’s capacity to behave intelligently in a way that is comparable to or impossible to differentiate from human behavior.
‘If an entity passes the Turing test, let alone a specific person, that person is conscious’, Kurzweil said.
Considering a machine that passes the Turing test to be 'conscious' is questionable, though, since even a perfect simulation of a conversation does not necessarily imply the presence of 'consciousness.'
In addition to downloading memories from the dead, Kurzweil believes that people will become immortal in just eight more years. He recently spoke with the YouTube channel Adagio about the development of genetics, nanotechnology, and robotics, which he thinks will result in ‘nanobots‘ that can turn back the hands of time.
These microscopic robots would repair damaged tissues and cells that degenerate as we age, protecting us from diseases like cancer. Kurzweil was already foreseeing technological advancements when he was hired by Google in 2012 to "work on new projects involving machine learning and language processing".
He stated in 1990 that a computer would defeat the greatest chess player in the world by the year 2000, and Deep Blue defeated Garry Kasparov in 1997, ahead of schedule. Another striking forecast Kurzweil made in 1999 was that by 2023, a $1,000 laptop would have the processing power and memory of a human brain.
According to him, connecting machines to our neocortex would enable us to think more intelligently. Machines are already enhancing our intelligence. He thinks that integrating computers into our brains will make us better, in contrast to some people’s concerns.
‘We’re going to get more neocortex, we’re going to be funnier, we’re going to be better at music. We’re going to be sexier’, he said.
‘We’re really going to exemplify all the things that we value in humans to a greater degree’.
Kurzweil thinks we will develop a human-machine synthesis that will improve us rather than a future in which machines overthrow humanity. Science fiction has long explored the idea of implanting nanomachines within the human body.
In short, while avatars of loved ones could help make a loss feel less traumatic, they could also lead to pathological relationships with a character who isn't real. People could convince themselves they are talking with a real person and never get enough of it. In a sense, this could make their grieving worse.
It recalls the movie The Final Cut with Robin Williams, set in a future society where implanted microchips record every moment of a person's life from their perspective; after a person dies, a "cutter" uses this recorded footage to create a highlight reel of their life to be played at their funeral. It also echoes the episode of Black Mirror that led to the creation of the GPT-based chatbot Replika.
As for the nanobots, being integrated with artificial parts that alter our bodies in such a drastic way may look scary. Although it may help cure some diseases, it could also make us less human. It also means there would be more powerful humans who could use that power against those who don't have access to it. [...]
April 13, 2023Understanding what’s ethical and how to implement it ethically could require different approaches
Understanding what constitutes ethical behavior is necessary for designing machines that reason and behave morally. After many millennia of moral investigation, there is still ongoing disagreement over how to define what is morally right and wrong.
Different ethical theories offer different justifications for what constitutes ethical action and disagree on what should be done. Designing an artificial ethical agent requires an engineering answer to this question.
The computational process of assessing and selecting among options in a way that is compliant with societal, ethical, and legal constraints is referred to as ethical decision-making by AI systems. In order to choose the best ethical choice that still permits the accomplishment of one’s objectives, it is vital to recognize and rule out unethical possibilities.
Ethical actions
To decide whether we can create ethical agents, we first need to understand whether it is feasible to establish a formal computational description of ethical action. Dennett, a philosopher who has written extensively on moral philosophy and ethics and is renowned for his work on free will, determinism, and the nature of consciousness, lists the following three conditions for ethical action:
it must be possible to select among various actions;
there must be general agreement among the society that at least one of the options is socially advantageous;
the actor must be able to identify the socially advantageous action and explicitly decide to take it since it is the ethical thing to do.
Theoretically, an agent that satisfies these requirements could be built. One strategy would be as follows. To begin with, we assume that an ethical agent is always able to recognize the full range of options available to it. Given that, it is simple to create an algorithm that chooses an action from such a list. Since our agent has a variety of options at its disposal, the first condition is satisfied.
We can provide the system with information about each action, for example by labeling it with a list of characteristics, and the agent can use these labels to determine the best action. Now imagine that we are able to assign each potential action in the given situation an "ethical degree" (for example, a number between 0 and 1, where 1 is the most ethical and 0 the least). This satisfies the second criterion. The agent can then use this knowledge to choose the most ethical option, which meets the third requirement.
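As a rough illustration of that strategy (a sketch under the assumptions above, not an implementation taken from the book), the selection step can be written in a few lines once each available action carries descriptive labels and an assumed "ethical degree" between 0 and 1:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    labels: list            # descriptive characteristics of the action
    ethical_degree: float   # assumed given: 0 = least ethical, 1 = most ethical

def choose_action(options):
    """Pick the most ethical action from an explicit list of options.

    Condition 1: the agent selects among several available actions.
    Condition 2: each action carries a societal "ethical degree".
    Condition 3: the agent explicitly picks the action because it is
    the most ethical one on offer.
    """
    if not options:
        raise ValueError("the agent must have at least one option")
    return max(options, key=lambda a: a.ethical_degree)

# Hypothetical usage
options = [
    Action("share user data", ["profitable", "privacy-invasive"], 0.2),
    Action("ask for consent", ["slower", "transparent"], 0.9),
]
print(choose_action(options).name)  # -> "ask for consent"
```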
The different approaches
There are three primary categories for ethical reasoning:
Top-down approaches, which extrapolate specific choices from general rules;
Bottom-up approaches, which infer general principles from specific examples. The goal is to give the agent enough information about what other people have done in comparable circumstances, together with the means to combine that information into something ethical;
Hybrid approaches, which blend aspects of the bottom-up and top-down approaches in order to foster a thoughtful moral response, seen as crucial for making ethical decisions.
Top-down
A top-down approach to modeling ethical reasoning specifies what the agent should do in a given situation according to a specific ethical theory (or perhaps a set of theories). These models formally define the rules, obligations, and rights that direct the agent's decision. Top-down approaches frequently build on Belief-Desire-Intention architectures and extend work on normative reasoning.
Different top-down strategies adopt different ethical theories. In maximizing models, the decision is based on how well a particular value is satisfied, which roughly corresponds to the utilitarian view of "the best for the most".
Top-down strategies presuppose that AI systems can consciously consider how their actions may affect others’ morality. These systems ought to adhere to the following standards:
Representational languages with sufficient depth to connect agent actions and domain knowledge to the established norms and values;
Putting in place the planning processes required by the theory’s practical reasoning;
Deliberative capabilities to determine whether the scenario at hand actually calls for moral judgment.
Top-down strategies impose an ethical system on the agent. These methods make the implicit assumption that ethics and the law are comparable and that a collection of rules is enough to serve as a guide for ethical behavior. They, however, are not the same. Usually, the law outlines what we are allowed to do and what we must refrain from doing. While ethics teaches us how to play a “good” game for everyone, the law only explains the rules of the game and offers no guidance on how to best win.
Furthermore, even if something is legal, we could still find it unacceptable. And even though we may think something is right, it might not be allowed.
Bottom-up
Bottom-up approaches assume that ethical behavior is learned by observing how others behave. A morally competent robot, in Malle's opinion, ought to include a mechanism that enables "constant learning and improvement". According to him, for robots to develop ethical competence, they must acquire morality and norms the way young children do. In one study, Malle asked individuals to rate their morality using the Moral Foundations Questionnaire, which assesses the ethical principles of harm, fairness, and authority; this information was then used to estimate the moral acceptability of a collection of propositions.
Bottom-up approaches are predicated on the core tenet that what is socially acceptable is also ethically acceptable. It is common knowledge, nonetheless, that occasionally positions that are de facto accepted are unacceptable by independent (moral and epistemic) standards and the facts at hand.
Hybrid
Top-down and bottom-up approaches are used in hybrid approaches in an effort to make ethical reasoning by AI systems both legally and socially acceptable.
Instead of being founded on moral guidelines or optimization principles, this viewpoint is grounded in pragmatic social heuristics. According to this perspective, both nature and nurture have a role in the development of moral behavior.
By definition, hybrid approaches can benefit from both the top-down and bottom-up approaches’ advantages while avoiding their drawbacks. These might provide an acceptable path forward as a result.
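A minimal sketch of how the three families might look in code follows; every rule, rating, and name in it is invented for illustration. A top-down layer filters out actions that violate explicit norms, a bottom-up layer scores the survivors by similarity to examples of socially rated behavior, and the hybrid agent chains the two.

```python
def violates_norms(action, rules):
    """Top-down layer: reject any action that breaks an explicit rule."""
    return any(rule(action) for rule in rules)

def learned_acceptability(action, examples):
    """Bottom-up layer: score an action by similarity to rated examples
    (a crude similarity-weighted average standing in for a learned model)."""
    def similarity(a, b):
        keys = set(a) | set(b)
        return sum(a.get(k) == b.get(k) for k in keys) / len(keys)
    weighted = [(similarity(action, ex), rating) for ex, rating in examples]
    total = sum(w for w, _ in weighted)
    return sum(w * r for w, r in weighted) / total if total else 0.0

def hybrid_choice(actions, rules, examples):
    """Hybrid layer: filter by explicit rules, then pick the best-rated survivor."""
    allowed = [a for a in actions if not violates_norms(a, rules)]
    if not allowed:
        return None
    return max(allowed, key=lambda a: learned_acceptability(a, examples))

# Hypothetical usage: one explicit norm, two rated examples, two candidate actions.
rules = [lambda a: a.get("deceives_user", False)]
examples = [({"deceives_user": False, "shares_data": False}, 0.9),
            ({"deceives_user": False, "shares_data": True}, 0.4)]
actions = [{"deceives_user": True, "shares_data": False},
           {"deceives_user": False, "shares_data": False}]
print(hybrid_choice(actions, rules, examples))
```

In a chain like this, the learned score never overrides an explicit prohibition, while the rules alone never have to rank the options they leave open, which is exactly the complementarity hybrid approaches aim for.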
Who decides the values?
The cultural and personal values of the individuals and societies involved must be taken into account when designing AI systems. To evaluate decisions made on the basis of such data, it is particularly important to consider and make explicit the following elements.
Crowd: Is the sample from which the data is being gathered sufficiently diverse to reflect the range and diversity of people who will be impacted by the AI system’s decisions? Furthermore, data gathered about decisions made by people inevitably reflects (unconscious) prejudice and bias.
Choice: Voting theory advises that giving only two options can easily be a false portrayal of the real choice, despite the fact that a binary choice may initially appear to be simpler.
Information: The answers are always framed by the question that was asked. The phrasing of a question may imply political purpose, particularly those that stir up strong emotions.
Involvement: Generally speaking, not all users are equally impacted by the decisions that are made. Nevertheless, regardless of participation, every vote is equally important.
Legitimacy: Democratic systems rely on majority decisions. Acceptance of the outcome, however, can become a concern when margins are extremely slim. The results also depend on whether voting is compulsory or voluntary.
Electoral system: the set of regulations that govern how people are consulted, how elections and referenda are held, and how their outcomes are determined. The way this system is set up greatly influences the outcomes.
Varying value priorities will lead to different choices, and it is frequently impossible to fully realize all desired values. Values are also extremely nebulous, abstract ideas that can be interpreted in a variety of ways depending on the user and the situation.
Decisions are made based on long-term goals and underlying shared values rather than on short-term convenience and narrow self-interest. Fishkin identifies the following as the vital elements of valid deliberation, based on the actual application of platforms for deliberative democracy:
Information: All participants have access to accurate and pertinent data.
Substantive balance: Based on the evidence they provide, several perspectives can be compared.
Diversity: All participants have access to and will be given consideration for all significant positions pertinent to the issue at hand.
Conscientiousness: Participants thoughtfully consider each point.
Equal consideration: Evidence is used to weigh opinions, not the person advancing them.
To which we can add one more principle.
Openness: For the purpose of designing and implementing collective wisdom approaches, descriptions of the options considered and the decisions made are transparent and easily accessible.
As we have seen, creating a morally sound AI agent is not simple; even with these different approaches, some chance of injustice, albeit low, remains.
How many times have we already run into such systems, even at this early stage? For example, when a social media profile is banned or an account is suspended and we have no chance to appeal. These examples should alert us to the undemocratic nature of an AI system that makes irreversible judgments. Not only does it take us back to authoritarian systems, it also prevents fair use of the platforms.
Therefore, if such behavior is to be avoided and decisions are to be as objective as possible, no judgment should ever be irrevocable: there must always be the possibility of appealing to a human, especially where there is ambiguity. In addition, logical reasoning and common sense should never be missing from the application of a rule, so that maximum objectivity can be pursued.
How many times have we had to endure rules endorsed by the majority that turned out to be wrong? Mere numbers do not guarantee that a rule is objective or ethical, nor does a majority's backing mean a useless rule will be recognized as such.
Responsible Artificial Intelligence by Virginia Dignum is available to purchase here [...]
April 11, 2023Ignoring the problem doesn’t help
If generative AI lives up to its hype, the workforce in the United States and Europe will be drastically altered, according to sobering and worrying research from Goldman Sachs on the rise of AI. According to the investment bank, this rapidly developing technology could lead to the reduction or loss of 300 million jobs.
As explained by Forbes, automation spurs innovation, which produces new kinds of employment. Companies will benefit from cost savings brought on by AI. They can use their riches to invest in starting and expanding enterprises, which will ultimately boost the annual global GDP by 7%.
In its first five days of operation, ChatGPT surpassed one million users, accomplishing this feat more quickly than any other company in history.
According to Goldman, the trajectory of AI development will be similar to that of earlier computer and tech products. Just as the world transitioned from enormous mainframe computers to contemporary technologies, the rapid expansion of AI will reshape the globe. AI is already capable of producing original works of art, acing the SAT, and passing the bar exam for attorneys.
Among the industries that will be touched by automation are office and administrative support, law, architecture and engineering, business and financial operations, management, sales, healthcare, and art and design.
The potential for a labor productivity boom, similar to those that followed the emergence of earlier general-purpose technologies like the electric motor and personal computer, is increased by the combination of significant labor cost savings, new job creation, and a productivity boost for non-displaced workers.
However, an academic study found that during the past 40 years, automation technology has been the main cause of income inequality in the United States. According to a survey by the National Bureau of Economic Research, the wage decreases among blue-collar workers who have been replaced by automation or whose jobs have been downgraded account for 50% to 70% of changes in U.S. earnings since 1980.
An enormous gap in wealth and income inequality has been created by the development of artificial intelligence, robotics, and other cutting-edge technology, and the problem appears set to worsen. For the time being, white-collar workers with college degrees have mostly avoided suffering the same fate as those without such degrees. Salary increases were observed among those with postgraduate degrees, whereas the earnings of "low-education workers significantly declined". According to the report, the real earnings of men without a high school degree are now 15% lower than they were in 1980.
Robotics and technology have replaced and will continue to replace individuals who work in manufacturing facilities and factories, cashiers, retail sales workers, and truck and cab drivers. The majority of minimum-wage and low-skilled occupations will soon be replaced by driverless cars, kiosks in fast food restaurants, and self-help, quick-phone scans in retail establishments.
Systems using artificial intelligence are everywhere. Ask an AI-powered digital voice assistant a question and it will tell you everything you need to know. You can communicate with an online chatbot in place of a live person to solve a problem. AI can aid in the diagnosis of diseases like cancer. Banks deploy sophisticated technologies to look for fraud and noncompliance. AI is primarily in charge of job applications, newsfeeds, social media, and driverless cars.
"A new generation of smart machines, fueled by rapid advances in AI and robotics, could potentially replace a large proportion of existing human jobs", the World Economic Forum (WEF) said in a 2020 report. Since the pandemic forced businesses to accelerate the adoption of new technology to cut costs, boost productivity, and be less dependent on actual humans, robotics and AI will produce a significant "double-disruption".
PriceWaterhouseCoopers, a leading management consulting company, stated that "AI, robotics, and other forms of smart automation have the potential to bring great economic benefits, contributing up to $15 trillion to global GDP by 2030". Yet a heavy human cost will accompany it: there are worries that the technology generating this additional wealth could replace a large number of current jobs, even as it increases demand for many others.
This raises another important but frequently disregarded problem. Advocates of AI claim that there is no need for concern because humans have always effectively adapted to new technologies. What does this indicate for the caliber of employment, though?
The changes brought on by artificial intelligence will catch even the world's most developed cities unprepared, according to management consulting firm Oliver Wyman. By some estimates, over 50 million Chinese workers may need retraining as a result of the deployment of AI, and 11.5 million Americans will need to be retrained in the skills necessary to function in the workforce. Millions of people in Brazil, Japan, and Germany will also need help adapting to the advances brought about by AI, robotics, and associated technologies.
In a 2019 study, Wells Fargo found that over the next 10 years robots would eliminate 200,000 jobs in the banking sector. High-paid Wall Street workers, such as bond and stock traders, have already been hurt: these are the individuals who previously traded securities for their banks, clients, and themselves on the trading floors of investment banks, before algorithms, quant-trading software, and other programs disrupted the industry and made their expertise obsolete.
Robots are impossible to avoid. Sophisticated robots that do delicate surgeries with greater accuracy and read X-rays with greater efficacy and precision to identify malignant cells that the human eye can’t easily see could replace well-trained and experienced doctors.
Even software engineers will become less in demand as artificial intelligence advances. The tech billionaire and creator of Twitter and Square, Jack Dorsey, predicts that AI will soon be able to write its own software, which will make things difficult for some new software engineers. In an episode of the Yang Speaks podcast, discussing how technology will replace human jobs, Dorsey told Yang that "We talk a lot about the self-driving trucks and whatnot", adding that "[AI] is even coming for programming". Many of the goals of machine learning and deep learning are to develop the software itself over time, so many entry-level programming positions will simply become less useful.
We should be concerned when management consultants and organizations that use AI and robotics suggest we shouldn’t be concerned. Companies will continue to use technology and reduce employee numbers in order to increase profits, whether it’s McDonald’s installing self-serve kiosks and laying off hourly employees to reduce costs or top-tier investment banks using software rather than traders placing multimillion-dollar bets on the stock market.
The point, then, is that new technologies will produce new jobs, but the workers needed will have to be more skilled, and there will certainly be fewer of them than those replaced in tasks that can be automated. This is easy to see if you think, for example, about how many self-checkouts have replaced cashiers, how many of those cashiers would be needed to program such checkouts, and how often they would be needed. Certainly not every day.
Therefore, if fewer but more skilled jobs are needed, those people will have to be retrained first. But is it really possible to retrain every person for a required task? It is like saying that if only surgeons are needed, the available people will have to become surgeons. Yet can everyone become a surgeon, or are we forgetting about personal inclinations?
Besides, there is the fact that not everyone starts from the same level of education, and moreover, the time for retraining could be very long for complex tasks. Meanwhile, how would such people be supported economically?
The mistake is to treat work as something everyone must do for a living when the trend is pushing us toward few, but highly skilled, jobs. It is clear, therefore, that this is not sustainable and that we need to imagine a society in which work is not the foundation of livelihood, but rather an increasingly niche activity. [...]
April 4, 2023Generative AIs could influence our thinking
Psychologists and behavioral scientists are currently studying the effects of powerful new AI-driven text and image generators on human creativity, judgment, and decision-making.
It has been said that GPT and other generative AIs will reduce jobs and boost productivity. Researchers and programmers are therefore probing, revealing, contrasting, and testing the strengths and drawbacks of these tools. But there are equally significant questions about how precisely they might influence our own abilities and skills.
According to this article, there are several concerns regarding how advanced AI may affect human judgment and decision-making as it becomes more likely to be used in the workplace.
Creativity
With its confident but frequently false statements, generative AI, especially tools that create images, may have more to offer in fields where a single answer isn't required and a range of possibilities is acceptable or even desirable. In this regard, it might serve as a launchpad for creativity.
Workplace productivity may also end up being redefined by AI, according to Tara Behrend, a professor who studies industrial-organizational psychology at Purdue University.
“If anyone can sneeze out 300 words with ChatGPT, maybe saying something original becomes productivity”.
“It is going to produce cliches, and cliches aren’t actually valuable”.
However, the criteria for originality or creativity are arbitrary, and it may not be clear what qualifies as creative, whether for humans or for AI.
Influence
Our judgments and conclusions can be influenced by others around us.
The question of “What are the social influences that are likely to happen working with ChatGPT?” is a major one as AI becomes more prevalent in the workplace, says Gaurav Suri, a computational neuroscientist and experimental psychologist at San Francisco State University.
“Most people are not using it that way now, but I think that issue is coming”.
Conformism
In the 1930s, social psychologist Muzafer Sherif used the autokinetic effect to study social norms and conformity: a person placed in a dark room and shown a point of light perceives it as moving, even though it is still. Sherif asked participants to estimate how far the light appeared to move, both individually and in groups. He discovered that individuals in a group conformed, using the estimates of others to fine-tune their own.
“How does this process change if we interact with an artificial agent in the same way as talking with fellow human beings?”, Suri asks.
“Would that interaction partner change the degree to which people stand behind an idea?”.
Trust
Most people respond that they prefer human decision-making over algorithmic decision-making when asked which they prefer.
According to Chiara Longoni, a behavioral scientist at Boston University, individuals currently appear to have “a general distrust for machines and algorithms”.
However, when asked whether they would rather use algorithmic or human judgment to make specific predictions, people preferred algorithmic advice to human advice, according to Georgetown University’s Jennifer Logg, who cited the findings of several studies by her and her colleagues in an unpublished working paper. The results back up past studies.
How will our perception of AI change as we interact with these increasingly intelligent models, Longoni asks.
While Longoni studies that question, researchers are also curious whether co-creating with AI alters how meaningful work feels, and whether a conversation with a chatbot can inspire you the way a talk with a colleague might.
If generative AI tools have an impact on critical thinking, then knowing how to interact with them and how to prompt them to provide the information you need should be listed as a new skill for resumes. The brave new world, according to Suri, is about asking how ChatGPT is altering how people react to certain situations.
Humans influence each other and can also be influenced by media. There is therefore no doubt that judgments and decisions can be altered by AI, especially when it is convincing and talks like a human. We know how wrong the answers given by AIs like ChatGPT can sometimes be, yet many people take them at face value anyway. People may trust a neighbor out of fondness and an AI for its efficiency, but both can be wrong. That's why stronger psychological skills may help in a world where it is increasingly hard to distinguish reality from fiction. [...]
March 28, 2023The fight for a new model of search
Since ChatGPT came out, even Google has had to respond to counter the rise of this new technology, which could push Google out of its monopoly on search.
ChatGPT-4
The AI chatbot’s updated version is finally here, and it now has the ability to produce responses to human inputs by utilizing a wide range of data scraped from various sources, including the internet. The previous version relied on the GPT-3.5 language model, and while it is still accessible, the new and improved version is now offered as part of the ChatGPT Plus package, available for a monthly fee of $20.
Although customers pay a $20 monthly fee, OpenAI can't guarantee a specific number of GPT-4 prompts per day, and the maximum number of allowed prompts may change at any time. The cap was initially set at 50 messages every four hours, but the number may occasionally be lower.
According to Wired, OpenAI states that ChatGPT Plus users avoid being bumped off the chatbot during periods of high usage and receive quicker responses. However, users may still have faced difficulties accessing ChatGPT during certain outages, and the GPT-4 version currently available may take more time to respond to prompts than GPT-3.5.
Regardless, there are still many unknowns surrounding GPT-4. OpenAI has yet to disclose certain details to the public, such as the size of the model or specific information regarding its training data. However, rumors suggest that the model may contain upwards of 100 trillion parameters.
According to OpenAI, ChatGPT-4 has several new features that allow it to generate more creative and nuanced responses than its predecessor. An example provided by OpenAI was: “Explain the plot of Cinderella in a sentence where each word has to begin with the next letter in the alphabet from A to Z, without repeating any letters”.
ChatGPT-4 answered: “A beautiful Cinderella, dwelling eagerly, finally gains happiness; inspiring jealous kin, love magically nurtures opulent prince; quietly rescues, slipper triumphs, uniting very wondrously, xenial youth zealously”.
Some of its features include:
Multimodal Capabilities: ChatGPT-4 is designed to process not only text inputs but also images and video using a "multimodal" approach. It can therefore recognize and describe what's in a picture, and the same could eventually apply to video and audio, though we haven't seen examples yet.
Nonetheless, enrolling in ChatGPT Plus doesn’t currently provide access to the company’s image-analysis capabilities, which have been recently demonstrated.
Greater Steerability: “Steerability” refers to the ability to control the model’s output by providing additional context or constraints. This means users can steer the conversation in a particular direction by providing more specific prompts or instructions. This feature is especially useful in applications that require users to achieve specific goals or results.
Suppose you use ChatGPT-4 to book a flight. Starting by asking, “Can you help me book a flight?” ChatGPT-4 will ask for more information about your travel plans, such as destination and travel date. By providing this information, you can use steerability to specify additional constraints and settings to refine your search. For example, you can say “I want to fly non-stop” or “I want to fly with a particular airline”. ChatGPT-4 uses this information to generate more specific flight options that match your criteria.
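For anyone driving this through the API rather than the chat interface, steerability largely amounts to packing those constraints into a system message. Here is a rough sketch using the OpenAI Python client; the model name, constraints, and wording are illustrative, not an official booking example:

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# Constraints the user wants to "steer" the model with (purely illustrative).
constraints = "Only suggest non-stop flights and prefer the airline 'ExampleAir'."

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a travel assistant. " + constraints},
        {"role": "user",
         "content": "Can you help me book a flight from Rome to Berlin on June 3rd?"},
    ],
)
print(response.choices[0].message.content)
```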
Safety: ChatGPT-4 was designed with safety in mind and trained on a wide variety of data to avoid harmful biases.
As the use of AI language models continues to grow, it becomes increasingly important to prioritize safety and ethics in model design. For this reason, OpenAI integrated security reward signals during human-feedback reinforcement learning (RLHF) training to reduce harmful outputs.
Compared to its predecessor GPT-3.5, GPT-4 has significantly improved safety characteristics: the model is 82% less likely to respond to requests for improper content.
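OpenAI hasn't published the training details, but the general idea of a safety reward signal in RLHF can be pictured as combining two scores before each policy update, as in this toy Python fragment where both reward models and the weighting are hypothetical placeholders:

```python
def combined_reward(response, helpfulness_model, safety_model, safety_weight=2.0):
    """Toy RLHF-style reward: reward helpful answers, penalize unsafe ones.

    helpfulness_model(response) -> score in [0, 1] from a preference model
    safety_model(response)      -> estimated probability the response is harmful
    Both models and the weighting are hypothetical placeholders.
    """
    return helpfulness_model(response) - safety_weight * safety_model(response)

# During RLHF, the policy would then be updated (e.g., with PPO) to maximize
# this combined reward over sampled responses, so harmful completions are
# discouraged even when they would otherwise score as helpful.
```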
Performance Improvements: ChatGPT-4 handles 8x the words of its predecessor, allowing it to work with up to 25,000 words at a time instead of the roughly 3,000-word limit of ChatGPT's free version.
OpenAI also demonstrated ChatGPT-4’s ability to explain why some jokes are funny. The demonstration included a series of images showing the wrong smartphone charger. ChatGPT-4 was able to explain why the situation was humorous. This suggests an ability to understand jokes.
Google Bard
Recently, users are also getting to know Bard, Google’s response to ChatGPT, to see how it stacks up against OpenAI’s chatbot powered by artificial intelligence.
According to this article, it’s a generative AI that responds to questions and performs text-based activities like giving summaries and responses while also producing other kinds of content. By condensing material from the internet and offering links to websites with more information, Bard also helps in the exploration of topics.
After OpenAI’s ChatGPT’s extremely popular debut, which gave the impression that Google was falling behind in technology, Google produced Bard. With the potential to upend the search market and tip the balance of power away from Google search and the lucrative search advertising industry, ChatGPT was seen as a breakthrough technology.
Three weeks following ChatGPT’s debut, on December 21, 2022, the New York Times reported that Google had declared “code red” in order to swiftly respond to the threat posed to its economic model. On February 6, 2023, Google announced the debut of Bard.
Due to a factual error in the demo intended to show off Google’s chatbot AI, the Bard announcement was a shocking failure.
Following this, investors lost faith in Google’s ability to handle the impending AI era, which resulted in Google’s shares losing $100 billion in market value in a single day.
A “lightweight” version of LaMDA, a language model which is trained using online data and information from public dialogues, drives Bard. There are two important aspects of the training:
A. Safety: The model reaches a level of safety by being fine-tuned on data annotated by crowd workers.
B. Groundedness: LaMDA grounds its assertions on external knowledge sources (through information retrieval, which is search).
Google evaluated the LaMDA outputs using three metrics:
Sensibleness: an evaluation of the logicality of a response.
Specificity: determines whether the response is contextually specific or the exact opposite of general/vague.
Interestingness: this metric assesses whether LaMDA's responses are insightful or stimulating.
Crowdsourced raters evaluated each of the three metrics, and the results were pushed back into the system to keep it improving.
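A minimal sketch of how such a feedback loop could be wired is below; the rater scores and thresholds are made up, since Google hasn't released this as code. Crowd ratings are averaged per metric, and only candidate responses that clear every bar are kept as fine-tuning targets.

```python
from statistics import mean

METRICS = ("sensibleness", "specificity", "interestingness")

def passes_quality_bar(ratings, thresholds):
    """ratings: one dict of 0-1 metric scores per crowd rater."""
    averages = {m: mean(r[m] for r in ratings) for m in METRICS}
    return all(averages[m] >= thresholds[m] for m in METRICS)

# Hypothetical usage: three raters score one candidate response.
ratings = [
    {"sensibleness": 0.9, "specificity": 0.7, "interestingness": 0.6},
    {"sensibleness": 0.8, "specificity": 0.6, "interestingness": 0.5},
    {"sensibleness": 1.0, "specificity": 0.8, "interestingness": 0.7},
]
thresholds = {"sensibleness": 0.8, "specificity": 0.6, "interestingness": 0.5}
print(passes_quality_bar(ratings, thresholds))  # True -> keep for fine-tuning
```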
Bard’s potential is currently seen as a search feature. Google’s announcement was vague enough to provide room for interpretation.
This ambiguity contributed to the false impression that Bard would be incorporated into Google search, which it is not. We can state with confidence that Bard is not a brand-new version of Google Search; it is a feature. Google's announcement was quite clear that Bard is not a search engine: while search surfaces links that lead to answers, Bard helps users explore and learn more.
Consider Bard as an interactive way to get knowledge on a variety of subjects. Large language models have the drawback of mimicking answers, which might result in factual mistakes. According to the scientists who developed LaMDA, methods like expanding the model’s size can aid in its ability to gather more factual data. However, they pointed out that this strategy falters in situations where facts are constantly altering over time, a phenomenon known as the “temporal generalization problem”.
A static language model cannot be trained on up-to-the-minute information. LaMDA addresses the problem by using information retrieval systems, which in practice are search engines: it examines search results to ground its answers.
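The grounding step can be pictured as a small retrieve-then-generate loop, sketched below in Python; the `web_search` and `generate` functions are placeholders for a search API and a language model, not LaMDA's actual components.

```python
def answer_with_grounding(question, web_search, generate, k=3):
    """Toy retrieve-then-generate loop.

    web_search(query, k) -> list of text snippets (placeholder search API)
    generate(prompt)     -> model completion      (placeholder language model)

    Fresh documents are fetched first and the model is conditioned on them,
    so facts come from current sources rather than from frozen training data.
    """
    snippets = web_search(question, k)
    context = "\n".join(f"- {s}" for s in snippets)
    prompt = (
        "Answer the question using only the sources below, and cite them.\n"
        f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return generate(prompt)
```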
Question-and-answer datasets, like those made up of questions and responses from Reddit, have the drawback of only representing how Reddit users behave, which makes it difficult to train systems like Bard.
It excludes how others who are not a part of that environment act, the types of questions they might ask, and the appropriate responses to such questions.
In recent tests, users were quite disappointed with Google's response to OpenAI: Bard does not seem as innovative and original as ChatGPT continues to be. Of course, developing a system that draws on web data in an up-to-date way is much more complex, both in terms of resources and in terms of identifying reliable information, than building on a more static dataset. Clearly, these are two different kinds of search, though, and we have yet to see how Bard will develop definitively. [...]