Technology around us is constantly evolving, compelling us to think about how we live and will live, how society will change, and to what extent it will be affected. For better or worse? It is difficult to give a clear answer. However, even art forms such as cinema can give us food for thought about society and ourselves, along with some psychological insight. All this to try to better understand ourselves, the world around us, and where we are headed.

The House blog tries to do all of that.

Latest posts
March 21, 2023
The fusion of humans and machines can lead us to a problematic future

Most people are aware of the plethora of artificial intelligence (AI) apps created to increase our productivity and creativity. We have apps that use text prompts to create art, as well as the contentious ChatGPT, which raises important concerns about originality, errors, and plagiarism. Despite these worries, AI is growing more pervasive and invasive, much as the internet and smartphones did before it. However, unlike previous technologies, many scientists and philosophers believe AI will eventually achieve (or even surpass) human-style “thinking”. The “technological singularity” is a futuristic idea that stems from this possibility as well as our growing reliance on AI. According to this article, the American science fiction author Vernor Vinge popularized the expression a few decades ago. The term “singularity” now refers to a hypothetical moment in the future when artificial general intelligence (AGI), or AI with human-level capabilities, will have advanced to the point where it permanently alters human civilization. It would herald the beginning of our unbreakable bond with technology; after that point, we would no longer be able to give up the technology without losing our ability to function as humans.

Brain implants

We only need to look as far as recent breakthroughs in brain-computer interfaces (BCIs) to realize why this isn’t the stuff of fairy tales. Several futurists believe that BCIs are a natural starting point for a singularity because they combine mind and machine in a way that no other technology has. Neuralink, a company run by Elon Musk, is asking the US Food and Drug Administration for approval to start BCI human trials. Neural connectors would be inserted into the participants’ brains to enable the communication of instructions through thinking. Neuralink wants to help the blind see again and paraplegics walk. Yet the company has other aspirations as well.
Brain implants, according to Musk, would enable telepathic contact and pave the way for the co-evolution of humans and machines. He contends that if we don’t employ such technology to improve our intelligence, superintelligent AI may wipe humanity out. Musk is not the only one who believes that AI’s skills will rapidly advance. According to surveys, the majority of AI researchers believe that within this century, AI will be “thinking” at the level of humans. They disagree on whether this implies consciousness, and on whether AI, once it reaches this level, will inevitably hurt us. A patient with amyotrophic lateral sclerosis (ALS) could use a minimally invasive device developed by Synchron, another BCI technology company, to write emails and access the internet. Tom Oxley, chief executive officer of Synchron, thinks that in the long run, brain implants may totally alter human communication, going beyond prosthetic rehabilitation. Addressing a TED audience, he claimed that users may one day be able to “throw” their emotions so that others might experience what they’re feeling. If this is the case, “the full potential of the brain would then be unlocked”, he stated. Early BCI developments could be seen as the first steps toward the hypothetical singularity, in which man and machine merge into one. This need not imply that machines will take on a life of their own or rule over us. But the integration itself, and our subsequent dependence on it, have the potential to permanently alter us. It’s also important to note that DARPA, the division of the US Department of Defense responsible for research and development, provided some of Synchron’s initial funding. DARPA is credited with helping to create the internet, and it seems sensible to wonder where DARPA’s investment money is headed.
AGI

Futures expert and former Google innovation engineer Ray Kurzweil believes that AI-enhanced humans could be put onto the autobahn of evolution and sent hurtling onward at crazy speeds. In his 2012 book How to Create a Mind, Kurzweil proposed the theory that the neocortex, the area of the brain thought to be responsible for “higher functions” like emotion, cognition, and sensory perception, is a hierarchical system of pattern recognizers that, if replicated in a machine, could produce artificial superintelligence. He estimates that the singularity will occur by 2045 and speculates that it may usher in a time of super-intelligent people, perhaps even the Nietzschean “Übermensch”, someone who transcends all limitations of the material world to realize their full potential. Yet not everyone believes that AGI is beneficial. Super-intelligent AI, according to the late, brilliant theoretical physicist Stephen Hawking, may bring about the end of the world. Hawking told the BBC in 2014: “the development of full artificial intelligence could spell the end of the human race. It would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded”. Nevertheless, Hawking supported BCIs anyway.

A hive mind

The concept of the AI-enabled “hive mind” is another one related to the singularity. A hive mind is described by Merriam-Webster as: “the collective mental activity expressed in the complex, coordinated behavior of a colony of social insects (such as bees or ants) regarded as comparable to a single mind controlling the behavior of an individual organism”. Neuroscientist Giulio Tononi created the Integrated Information Theory (IIT) to explain this phenomenon. It implies that all of us are moving toward the fusion of all information and thoughts. Galileo’s Error, written by philosopher Philip Goff, does a good job of elaborating on the consequences of Tononi’s idea.
“IIT predicts that if the growth of internet-based connectivity ever resulted in the amount of integrated information in society surpassing the amount of integrated information in a human brain, then not only would society become conscious but human brains would be ‘absorbed’ into that higher form of consciousness. Brains would cease to be conscious in their own right and would instead become mere cogs in the mega-conscious entity that is the society including its internet-based connectivity”. It’s important to note that there isn’t much proof that such a thing will ever happen. Yet the theory raises significant questions about the nature of consciousness itself, as well as about rapidly advancing technology (not to mention how quantum computing may accelerate all this). It is conceivable that the emergence of a hive mind would lead to the end of individuality and of the institutions that depend on it, such as democracy. In a recent blog post, OpenAI (the company that created ChatGPT) reaffirmed its commitment to attaining AGI. Undoubtedly, many will do the same. Our lives are increasingly being governed by algorithms in ways that we frequently cannot discern and must thus accept. Many aspects of a technological singularity promise to greatly improve our lives, but the fact that these AIs are the creations of private industry raises some concerns. They are mostly unregulated and subject to the whims of impulsive “technopreneurs” who have access to far more resources than the majority of us put together. Whether we think they’re crazy, naive, or visionaries, we have a right to know what they have in mind (and to be able to rebut them). Although blending technology with the human body may give people with diseases a better life, it is unsettling if we must also do so merely to counter the power of AIs and avoid being overwhelmed by their capabilities. It seems our lives are on a train we can’t get off, on which we can only change class. [...]
March 16, 2023
When information is more important than selling

Capitalism is changing, especially now that we are in an era where technology has the power to make people not only interconnected but also more vulnerable, especially to manipulation. Companies try every day to sell their products in a variety of ways. They use psychological tricks to stimulate your ‘needs’. However, the best way to convince or deceive you is to know you. Therefore, they need information about you. The power of A.I. makes it possible to collect data from different sources and use that information to predict your behavior. They don’t stop at predicting your moves and offering you something to buy, though: they can manipulate your actions and choices to lead you to their desired aim. Data can be collected from many independent sources such as search history, social media, and even your physical movements. Although this information looks unrelated, it can be used to build a personal profile of a person’s tastes and desires. This information is called ‘behavioral surplus’ because it is collected from the additional behavior we exhibit on the web, beyond what we exhibit in real life. However, even more ‘physical’ movements can be analyzed and used to refine our profile. This enables companies to increase their profits by creating more effective advertising, offering personalized recommendations, and even manipulating our behavior in subtle ways. Prediction is therefore a key component of so-called ‘surveillance capitalism’, meaning that companies make money not only by convincing people through marketing and adverts but also by monitoring your life, drawing on your personal clues to lead you to the point where you make a decision you believe comes from yourself but that, unbeknownst to you, comes from a company that traced the path.
This behavior has a significant impact on society: by manipulating our behavior and choices, companies undermine our autonomy and privacy and erode the foundations of a democratic society. That is why surveillance capitalism needs to be regulated in order to protect people’s privacy and autonomy, and why we should be able to hold companies accountable for their actions. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power by Shoshana Zuboff is available to purchase here. [...]
March 14, 2023
NASA scientists think it’s possible

Since the first episode of the science fiction television series Star Trek was broadcast, viewers have had countless unanswered questions. Over the years, science fiction has always been entwined with actual science, and it has influenced technologies that people use every day. Among the many extraordinary concepts shown in the series, warp drive captured the interest of individuals all around the globe. It was the very first notion put forth in Star Trek, enabling starships to span the galaxy faster than light. However, Einstein’s Theory of Relativity prohibits anything from traveling at or beyond the speed of light. According to this article, Mexican physicist Miguel Alcubierre first presented the Alcubierre drive as a potential propulsion technology in 1994. It is an idea for a spacecraft that might be able to exceed the speed of light without breaking any physical laws. The idea behind the Alcubierre drive is that instead of moving the spaceship itself through space, the drive would contract the fabric of space in front of the ship and expand the fabric of space behind it, essentially creating a “warp bubble”. The spaceship would then ride the wave created by the expanding and contracting fabric, effectively moving through space without actually moving through space. This would allow the spaceship to travel faster than the speed of light while still obeying the laws of relativity. Although most people believed that this made perfect sense in theory, it was not considered feasible in practice.

[Image: a properly constructed Alcubierre warp bubble. As space constricts in front of the vessel and expands behind, the ship is theoretically pushed forward at speeds faster than light. Credit: LSI, White, et al.]

Joseph Agnew, a student at the University of Alabama, set out to test the idea and answer the skeptics.
Joseph says, “Mathematically if you fulfill all the energy requirements, they can’t prove that it doesn’t work”. “Suppose you have a craft that’s in the bubble”, he continues. “What you would do is, you’d compress space-time ahead of the craft and expand space-time behind it”. Objects effectively become heavier as they move more quickly, and the heavier they become, the harder it is to accelerate them further. To put it simply, it is utterly impossible to accelerate a massive object to the speed of light. Warp drive is supposedly the ultimate goal of space travel: a propulsion system capable of traveling at speeds greater than the speed of light. Science fiction authors have given us hope with many depictions of interstellar travel, yet moving at the speed of light is impossible. As Einstein’s theory explains, nothing can move faster than light, because accelerating an object of any mass to the speed of light would require an infinite amount of energy. Only the absence of mass in photons, light’s constituent particles, explains why light itself is unaffected. So it is effectively impossible for a spaceship to travel at the speed of light. There are two loopholes, though: relativity only constrains the propulsion of objects through space, and it says nothing that prohibits space itself from stretching or contracting faster than light. It may therefore be feasible to get around the speed limit for objects without breaking the rules of physics. This is where the “Alcubierre warp drive” notion comes in. The Alcubierre warp drive might conceivably get around the speed of light by warping space-time, just like in the television series “Star Trek”, rather than exceeding it. According to the hypothesis, the traveling spaceship is encircled by a ring of negative matter and sits inside the warp bubble.
The ring of negative matter would aid in contracting spacetime in front of the spaceship and stretching it behind. By doing this, the spacecraft could move at ten times the speed of light, while within the bubble it would continue to travel at the maximum speed permitted by general relativity. The warp drive would need a significant quantity of mass-energy to work: to drive the spaceship at such a level, you would require a mass equal to that of Jupiter. Think of the Einstein equation E = mc². You would require a tremendous quantity of energy, more than the cosmos will ever be able to supply. NASA mechanical engineer Dr. Harold Sonny White and other physicists are still looking for solutions to the problem of the necessary mass-energy. White thinks it is probably possible to reduce the mass-energy requirement indicated in the Alcubierre hypothesis by bending the rules of physics. He said that it might be possible to slightly alter the negative mass ring’s design to bring the mass requirement down to roughly 700 kg. The White-Juday Warp Field Interferometer is currently being built by a NASA team led by White. It is a beam-splitting interferometer with an excellent ability to produce and detect the smallest warp bubble. Though noteworthy, there is still a long way to go before warp drive and interstellar travel are practical. But thanks to technological breakthroughs, the solutions we seek might be within reach. TV series and movies have always tried to imagine the future and its technology. Sometimes they failed, but other times they managed to suggest ideas we would see in the near future. Therefore, it’s often hard to say whether technology is evolving the way we envisioned, or whether we are the ones trying to replicate what we saw in a movie or TV series. [...]
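The mass-energy figures in the warp-drive post above can be sanity-checked with E = mc². This small Python sketch is our own illustration, not part of the article: Jupiter’s mass and the speed of light are standard reference values, and the 700 kg figure is the reduced requirement White suggested.

```python
# Back-of-the-envelope comparison of the mass-energy scales in the Alcubierre
# discussion: a Jupiter-mass requirement versus White's suggested ~700 kg.

C = 2.998e8           # speed of light, m/s
M_JUPITER = 1.898e27  # mass of Jupiter, kg
M_REDUCED = 700.0     # White's suggested mass requirement, kg

def mass_energy(mass_kg: float) -> float:
    """Rest-mass energy E = m * c^2, in joules."""
    return mass_kg * C ** 2

e_jupiter = mass_energy(M_JUPITER)
e_reduced = mass_energy(M_REDUCED)

print(f"Jupiter-mass energy: {e_jupiter:.2e} J")
print(f"700 kg energy:       {e_reduced:.2e} J")
print(f"Reduction factor:    {e_jupiter / e_reduced:.1e}")
```

The design change White hopes for thus spans roughly 24 orders of magnitude in energy, which is why it would matter so much.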
March 7, 2023
Biocomputers will be the next frontier to handle the massive required computational power

Although computers driven by human brain cells may sound like science fiction, a team of US researchers believes such machines, part of a new field dubbed “organoid intelligence”, could shape the future. And they now have a strategy to get there. According to CNN, organ-like tissues created in the laboratory are called organoids. These three-dimensional models, typically created from stem cells, have been employed in laboratories for almost 20 years, allowing researchers to conduct studies on kidney, lung, and other organ-like models without hurting humans or animals. The pen-dot-sized cell cultures that make up brain organoids don’t actually resemble miniature replicas of the human brain, but they do include neurons that can perform brain-like tasks and connect in a myriad of ways. In 2012, Dr. Thomas Hartung started developing brain organoids by modifying samples of human skin at the Johns Hopkins Bloomberg School of Public Health and Whiting School of Engineering in Baltimore. He and his colleagues hope to employ the potential of brain organoids to create biological technology that consumes less energy than supercomputers. These “biocomputers” would make use of networks of brain organoids to potentially transform drug testing for conditions like Alzheimer’s, give information about the human brain, and alter computing in the future. The research revealing Hartung and his team’s plan for organoid intelligence was published in the journal Frontiers in Science. “Computing and artificial intelligence have been driving the technology revolution but they are reaching a ceiling”, said Hartung, senior study author, in a statement. “Biocomputing is an enormous effort of compacting computational power and increasing its efficiency to push past our current technological limits”.
Whereas human mental processes serve as a model for artificial intelligence, the technology cannot completely imitate the human brain. A supercomputer can process enormous volumes of data far more quickly than a human. “For example, AlphaGo (the AI that beat the world’s No. 1 Go player in 2017) was trained on data from 160,000 games”, Hartung said. “A person would have to play five hours a day for more than 175 years to experience that many games”. The human brain, on the other hand, uses energy more efficiently and is better at learning and reaching complicated logical conclusions. It is easily capable of tasks that a machine cannot perform, such as distinguishing one animal from another. The $600 million supercomputer Frontier, located at the Oak Ridge National Laboratory in Tennessee, has cabinets weighing a hefty 8,000 pounds (3,629 kg) each, the same as two regular pickup trucks. The machine’s processing power surpassed that of a single human brain in June, but Hartung said it required a million times more energy. “The brain is still unmatched by modern computers”, Hartung said. “Brains also have an amazing capacity to store information, estimated at 2,500 (terabytes)”, he added. “We’re reaching the physical limits of silicon computers because we cannot pack more transistors into a tiny chip”. John B. Gurdon and Shinya Yamanaka, pioneers in the field of stem cells, were awarded the Nobel Prize in 2012 for their work on a method that made it possible to create stem cells from fully grown tissues, such as skin. With the help of this ground-breaking research, researchers like Hartung were able to create brain organoids that mimicked living brains and to test and detect medications that may be harmful to brain health. Some scientists once questioned Hartung about whether brain organoids were capable of thought and consciousness.
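Hartung’s AlphaGo comparison is easy to verify with a few lines of arithmetic. In this sketch of ours, the roughly two hours per game is an illustrative assumption (not a figure from the article); only the 160,000 games and the five hours a day come from the quote.

```python
# Sanity check of the "175 years" claim: how long would a human need to
# experience 160,000 Go games, playing 5 hours a day?

GAMES = 160_000
HOURS_PER_GAME = 2.0  # assumed average length of a serious Go game
HOURS_PER_DAY = 5.0

total_hours = GAMES * HOURS_PER_GAME
years = total_hours / HOURS_PER_DAY / 365

print(f"Total playing time: {total_hours:,.0f} hours")
print(f"At 5 h/day: about {years:.0f} years")  # ≈ 175 years, matching the quote
```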
In response, he thought about feeding organoids knowledge about their environment and how to interact with it. “This opens up research on how the human brain works”, said Hartung, who is also the co-director of the Center for Alternatives to Animal Testing in Europe. “Because you can start manipulating the system, doing things you cannot ethically do with human brains”. Hartung describes organoid intelligence as “reproducing cognitive functions, such as learning and sensory processing, in a lab-grown human-brain model”. For OI, or organoid intelligence, Hartung would need to scale up the brain organoids he currently uses. Each organoid contains roughly the same number of cells as the nervous system of a fruit fly. A single organoid is equivalent to around 800 megabytes of memory storage because it is one three-millionth the size of the human brain. In order to share information with the organoids and get readouts of what they are “thinking”, the researchers also need a means of communicating with them. The study’s authors have created a blueprint that combines new developments with technologies from bioengineering and machine learning. According to the study’s authors, more complex activities would be possible if organoid networks were to support various types of input and output. “We developed a brain-computer interface device that is a kind of an EEG (electroencephalogram) cap for organoids, which we presented in an article published last August”, Hartung said. “It is a flexible shell that is densely covered with tiny electrodes that can both pick up signals from the organoid, and transmit signals to it”. According to the researchers, human medicine may be where organoid intelligence makes its most significant contributions. Scientists could create brain organoids from skin samples of people with neural disorders, allowing them to study the effects of various drugs and other factors.
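The 800-megabyte figure above follows from simple arithmetic on the article’s own numbers: the brain’s estimated 2,500 terabytes of storage and the one-three-millionth size ratio. This quick Python check is ours, and it assumes decimal units (1 TB = 10¹² bytes, 1 MB = 10⁶ bytes).

```python
# Cross-check of the organoid storage estimate: scale the brain's estimated
# capacity down by the organoid's relative size.

BRAIN_TB = 2_500        # estimated human brain capacity, terabytes
SCALE = 1 / 3_000_000   # organoid size relative to the brain

organoid_bytes = BRAIN_TB * 1e12 * SCALE
organoid_mb = organoid_bytes / 1e6

print(f"Organoid estimate: ~{organoid_mb:.0f} MB")  # close to the article's 800 MB
```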
“With OI, we could study the cognitive aspects of neurological conditions as well”, Hartung said. “For example, we could compare memory formation in organoids derived from healthy people and from Alzheimer’s patients, and try to repair relative deficits. We could also use OI to test whether certain substances, such as pesticides, cause memory or learning problems”. Moreover, brain organoids may provide a new perspective on how people think. “We want to compare brain organoids from typically developed donors versus brain organoids from donors with autism”, said study co-author and co-investigator Lena Smirnova, a Johns Hopkins assistant professor of environmental health and engineering, in a statement. “The tools we are developing towards biological computing are the same tools that will allow us to understand changes in neuronal networks specific for autism, without having to use animals or to access patients, so we can understand the underlying mechanisms of why patients have these cognition issues and impairments”, she said. There are already encouraging outcomes that show what is feasible: brain cells can learn to play the video game Pong, according to research co-author Dr. Brett Kagan, chief scientific officer at Cortical Laboratories in Melbourne, Australia, and his team. The use of brain organoids to generate organoid intelligence is still in its very early stages. According to Hartung, it could take decades to develop an OI with even mouse-like cognitive abilities. “Their team is already testing this with brain organoids”, Hartung said. “And I would say that replicating this experiment with organoids already fulfills the basic definition of OI. From here on, it’s just a matter of building the community, the tools, and the technologies to realize OI’s full potential”.
The creation of human brain organoids that can perform cognitive tasks poses several ethical questions, such as whether the organoids may experience consciousness or pain, and whether the people whose cells were used to create them have any legal claim to the organoids. “A key part of our vision is to develop OI in an ethical and socially responsible manner”, Hartung said. “For this reason, we have partnered with ethicists from the very beginning to establish an ‘embedded ethics’ approach. All ethical issues will be continuously assessed by teams made up of scientists, ethicists, and the public, as the research evolves”. In a separately released policy viewpoint, Julian Kinderlerer, professor emeritus of intellectual property law at the University of Cape Town in South Africa, stressed the importance of including the general public in the understanding and advancement of organoid intelligence, although Kinderlerer was not part of the current OI study. “We are entering a new world, where the interface between humans and human constructs blurs distinctions”, Kinderlerer wrote. “Society cannot passively await new discoveries; it must be involved in identifying and resolving possible ethical dilemmas and assuring that any experimentation is within ethical boundaries yet to be determined”. We got used to computers consisting of hardware and software. Now we will have to get used to dealing with computers that also have a biological component. This merging of biology and electronics opens up even more complex considerations than the risks of technology alone, such as AI. The ethical implications are not easy to deal with. Can such neurons be considered truly alive? Could they become sentient, yet remain trapped inside a computer? The questions are many, and the answers are not simple. Nevertheless, it is wise to weigh all the necessary considerations before creating something from which there will be no turning back. [...]
February 28, 2023
A.I. can be a weapon for dictatorial governments

The next great gift from the free world to authoritarians is likely to be generative AI. The world’s dictators have become aware of the transformative power of generative AI to produce original, compelling material at scale, thanks to the viral introduction of ChatGPT, a system with uncannily human-like capabilities for writing essays, poetry, and computer code. Generative AI refers to a class of artificial intelligence algorithms that can autonomously create content such as images, music, text, and videos that was not previously input into the algorithm itself. Generative AI algorithms often use deep learning techniques, such as generative adversarial networks (GANs), to create content that appears authentic and indistinguishable from that created by humans. These algorithms can be used in many fields, such as art, music, fashion, design, video games, and even book writing. However, the heated debate that has developed among Western industry executives regarding the dangers of disseminating cutting-edge generative AI tools has largely overlooked the autocracies where the consequences are most likely to be harmful. Up until now, the main worries about generative AI and autocrats have mainly concerned how these systems could amplify propaganda. ChatGPT has already shown how generative AI can automate the spread of false information. Generative AI heralds a change in the speed, scope, and legitimacy of dictatorial influence operations, especially when combined with improvements in targeted advertising and other new precision propaganda tactics. For now, the free world holds a lead in this technology. As it matures, that lead will be increasingly important in giving open societies time to understand, detect, and mitigate potential harms before autocratic states leverage the technologies for their own ends. But the free world risks squandering this advantage if these pioneering tools are easily acquired by authoritarians.
Regrettably, it is difficult to keep sophisticated AI models out of the hands of autocrats. Technically speaking, generative AI models are easy to steal: although they need a lot of resources to build, they can be quickly replicated and modified at low cost once they are created. Moreover, several companies’ efforts to keep generative AI open source can be easily taken advantage of by AI researchers in autocratic states. Instead, companies should approach the development of generative AI with the care and security precautions required for a technology with significant potential to feed dictatorship, and abstain from revealing the technical details of their cutting-edge models to the public. To build on existing policies that restrict the export of surveillance technology, democratic governments should make clear the strategic significance of generative AI and impose immediate export restrictions on cutting-edge models of this kind to unreliable partners. Only reputable organizations with sound security procedures should be eligible for federal research funding for this type of AI. The alternative is a well-trodden route, in which tech companies support techno-authoritarianism by combining commercial incentives with naivete. Although the power of A.I. can be a weapon for dictatorial governments, we shouldn’t ignore that it can also turn democratic governments into dictatorial ones. Manipulation is everywhere and at every level, from marketing to politics. A government that declares itself democratic isn’t necessarily so, because sometimes it’s better not to get caught. AI shouldn’t be a privilege of governments if we want a less authoritarian world. Therefore, as AI evolves, we need other kinds of AIs that everybody can adopt to counter its downsides. [...]
February 21, 2023
From feelings to music to words

As explained here, according to Duke University futurist Nita Farahany, the technology to decode our brainwaves already exists, and some companies are likely already testing it. That’s what she claimed during her recent “The Battle for Your Brain” presentation at the World Economic Forum in Davos. “You may be surprised to learn it is a future that has already arrived”, Farahany said in her talk. “Artificial intelligence has enabled advances in decoding brain activity in ways we never before thought possible. What you think, what you feel, it’s all just data, data that in large patterns can be decoded using artificial intelligence”. Sensors in wearables like hats, headbands, tattoos placed behind the ear, or earbuds can pick up EEG signals, and AI-powered devices can decode everything from emotional states to concentration levels to basic shapes and even your pre-conscious reactions to numbers (i.e., to steal your bank card’s PIN without your knowledge). In one dystopian, yet very probable, scenario, an employer could use AI to keep tabs on a worker, check to make sure they’re wearing the correct equipment, and detect whether they’re paying attention to the task at hand or daydreaming about something unrelated. “When you combine brainwave activity with other forms of surveillance technology”, Farahany said, “the power becomes quite precise”. Additional technology built into a watch can pick up electromyography signals, tracking brain activity as it sends signals down your arm to your hand. According to Farahany, by combining these technologies, we will be able to control our own electronics with our thoughts. She continued: “The coming future, and I mean near-term future, these devices become the common way to interact with all other devices. It is an exciting and promising future, but also a scary future.
Surveillance of the human brain can be powerful, helpful, and useful, transform the workplace, and make our lives better. It also has a dystopian possibility of being used to exploit and bring to the surface our most secret self”. Farahany addressed the Davos meeting to push for a commitment to cognitive liberties, including topics like mental privacy and freedom of thought. When a person uses this technology to gain a better understanding of their own mental health or well-being, she claimed, it has the potential to be beneficial and even to serve as a warning indicator for potential medical problems. Also, as more people monitor their brainwaves, the data sets grow, allowing companies to extract more information from the same data. But that has a flip side. “More and more of what is in the brain”, she said, “will become transparent”.

Decoding the music you’re listening to

Another study, published in the journal Scientific Reports, used a combination of two non-invasive techniques to track a person’s brain activity while listening to music: electroencephalography (EEG), which records what is happening in the brain in real time, and functional magnetic resonance imaging (fMRI), which measures blood flow throughout the entire brain. The information was fed into a deep-learning neural network model to reconstruct and identify the piece of music. As natural language and music both consist of complicated acoustic signals, it should be possible to adapt the model to translate speech. This line of research eventually hopes to translate thought, which might be a significant help in the future for those who have trouble communicating, such as those with locked-in syndrome. Dr. Daly from Essex’s School of Computer Science and Electronic Engineering, who led the research, said: “One application is brain-computer interfacing (BCI), which provides a communication channel directly between the brain and a computer.
Obviously, this is a long way off but eventually, we hope that if we can successfully decode language, we can use this to build communication aids, which is another important step towards the ultimate aim of BCI research and could, one day, provide a lifeline for people with severe communication disabilities”. The study reused fMRI and EEG data from participants listening to a set of 36 pieces of simple piano music that varied in tempo, pitch, harmony, and rhythm. Each piece was played for 40 seconds at a time. The 36 pieces had been selected as part of a previous project at the University of Reading. Using these combined data sets, the model identified the piece of music with a success rate of 71.8%.

Guessing words from your brain

Researchers at Meta’s AI research division, by contrast, decided to investigate whether they could decode complete sentences from someone’s neural activity without requiring dangerous brain surgery. In a paper posted on the pre-print server arXiv, the researchers described how they created an AI system that can predict what words someone is listening to based on brain activity captured with non-invasive brain-computer interfaces. “It’s obviously extremely invasive to put an electrode inside someone’s brain”, explained Jean Remi King, a research scientist at the Facebook Artificial Intelligence Research (FAIR) Lab. “So we wanted to try using noninvasive recordings of brain activity. And the goal was to build an AI system that can decode brain responses to spoken stories”. The researchers made use of four pre-existing datasets of 169 individuals’ brain activity as they listened to spoken-word recordings. Each volunteer was recorded with both magnetoencephalography and electroencephalography (MEG and EEG), which use different types of sensors to detect the brain’s activity from outside the skull.
In their approach, the brain and audio data were divided into three-second segments and fed into a neural network, which then searched for patterns that could link the two. After training the AI for many hours on this data, they tested it on new data. The system performed best on one of the MEG datasets: the correct word was present 72.5% of the time among the 10 terms the model judged most likely to be connected to the brain-wave segment. Although that may not sound impressive, keep in mind that the word was chosen from a vocabulary of 793 words. On the other MEG dataset, the system achieved a score of 67.2%, but it performed worse on the EEG datasets, with top-10 accuracies of just 31.4% and 19.1%. Although this technology is clearly still far from being useful, it represents important progress on a challenging problem. Decoding brain activity this way is difficult because non-invasive BCIs have significantly worse signal-to-noise ratios. However, if successful, this approach could lead to a technology that would be much more widely used. But not everyone is convinced that the problem can be solved. According to Thomas Knopfel of Imperial College London, using these non-invasive methods to listen to someone’s thoughts is like “trying to stream an HD movie over old-fashioned analog telephone modems”, and he questioned whether they will ever be accurate enough for practical use. The Meta team’s study is still in its very early phases, so there is plenty of room for development. And given the commercial prospects that await, anyone who can master non-invasive brain scanning will have plenty of incentive to try. Our mind is the most private place we have. If someone can know what we think, we can no longer be free, and we might be completely controlled.
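To make the top-10 accuracy figures above concrete, here is a minimal sketch of how such a metric is computed. The scores and vocabulary below are made up for illustration; this is not Meta’s evaluation code, only the generic top-k calculation their numbers refer to.

```python
import numpy as np

def top_k_accuracy(scores, true_indices, k=10):
    """Fraction of segments whose true word appears among the k
    highest-scoring candidate words.

    scores: (n_segments, vocab_size) array of model scores
    true_indices: (n_segments,) array with each segment's true word index
    """
    # Indices of the k best-scoring words for each segment
    top_k = np.argsort(scores, axis=1)[:, -k:]
    hits = [true_indices[i] in top_k[i] for i in range(len(true_indices))]
    return float(np.mean(hits))

# Toy example: 4 segments, vocabulary of 793 words (as in the paper)
rng = np.random.default_rng(0)
scores = rng.normal(size=(4, 793))
truth = np.array([10, 20, 30, 40])
# Force the true word into the top 10 for three of the four segments,
# and out of the top 10 for the last one
scores[0, 10] = 100.0
scores[1, 20] = 100.0
scores[2, 30] = 100.0
scores[3, 40] = -100.0
print(top_k_accuracy(scores, truth))  # 0.75 (3 of 4 segments)
```

A top-10 accuracy of 72.5% over 793 candidates is far above the ~1.3% one would expect by picking 10 words at random, which is why the result counts as progress despite sounding modest.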
Although reading what’s in someone’s mind could be useful for treating some diseases and could help people who can’t communicate to improve their lives, if this technology were used to monitor our actions, it could lead to a society even more dystopian than one run by AI. [...]
February 16, 2023
How we relate to others when we hide behind a screen

Many people feel alone for a variety of reasons. Some have a hard time relating to others, some live in an isolated place, and some have specific interests that are hard to share with anyone else. Especially for those people, technology seems to have offered an escape from these difficult situations. Before social media evolved into what we know today, or before it existed at all, using the internet to establish relationships was quite hard, especially for non-experts. Now, social media is used by people of every age, and not everyone uses it out of loneliness: for many, it’s an opportunity to be the center of attention.

However, people on the internet don’t communicate their feelings through social media alone; sometimes they prefer confessional sites, where they can write more anonymously and feel freer to express private things or secrets. On these sites, users aren’t necessarily expected to answer: they can ignore you or be kind, but it may also happen that someone replies harshly. Nonetheless, we need to express our thoughts to the world, hoping for a good answer or some comfort from a stranger, because it seems easier and less embarrassing to talk about secrets and private matters with an unknown person, especially if we are anonymous too. That’s the main difference from social media, where we are behind a screen but friends and relatives know us, and we generally use our real name and photo. Once, confessions were the prerogative of family or friends, but those happen in a space where talking implies negotiation. We can therefore assume that some conversations wouldn’t happen at all except on the internet. Talking to kin or a friend may bring disapproval; although that is hard to take, it’s part of the relationship. It means someone cares about you, or maybe doesn’t.
It’s a risk, but it may lead to a more solid relationship when we know someone is there to comfort us, especially with a serious problem. Even when we receive criticism, we can learn something: sometimes we learn that we are the ones who did something wrong, or that the person we are talking to is not that reliable. In online conversation, though, none of this complexity happens. We are now entering an era in which chatbots are evolving rapidly, and having a conversation with one can feel like talking to a stranger on the internet. And while some read confessional sites simply out of curiosity, others take comfort in learning that other people have the same troubles they do. Sharing problems with a bot, however, is perceived as less shaming than sharing them with a stranger, who is still a person and could react harshly. So, while it’s easier to confess, it’s also easier to be aggressive. People can feel satisfied at getting such feelings out, but they can also become vulnerable in new ways, even though they hope to be repaid in intimacy for such a secret confession. Cruelty can find free rein, because those who attack easily detach the words from the person, and their frustration finds fertile ground. In this scenario, we are minds without bodies who feel free to say anything, without moral constraints, for better and for worse. Of course, extreme reactions can also happen between people in real life, but on the internet there’s no limit, and people hope that talking online will spare them from having to talk to someone in person. When we know we’re having a conversation with a bot, however, we go a step further: we know it can’t answer cruelly, so we can feel completely free to say anything, even the most disagreeable things. Nonetheless, even the most accurate chatbot can only answer our problems rationally, not emotionally. We can’t establish a real relationship with it; we can only find good answers, which is valuable, but it’s another thing entirely.
Trying to find relief through this solution alone leads us to give up the emotional resources we use to build relationships with others, because we feel satisfied with a digital surrogate that should be a supplement rather than the main solution. This behavior may keep us from taking positive action, because we already feel we’ve done “something”, while we need more than that not to feel detached from others.

Alone Together – Why We Expect More from Technology and Less from Each Other by Sherry Turkle is available to purchase here [...]
February 14, 2023
A neuroscientist points out the importance of a more ‘conscious’ AI

Chatbots are evolving quickly, and thanks to AI they can now communicate with people in natural language, much as real people do. But although these conversations may look human, they lack feelings. In this regard, a Princeton neuroscientist has warned that AI-powered chatbots may come to resemble sociopaths if they remain emotionless. By definition, a sociopath is a person with antisocial personality disorder, a mental illness defined by a recurring habit of neglecting the rights and feelings of others. People who suffer from antisocial personality disorder frequently act in a manipulative or dishonest manner and may also have criminal or violent impulses. Applied to AIs, this means that their excessive rationality can lead them to make choices that serve the intent of their creators, or of whoever controls them, rather than to act honestly. For example, an AI built to sell products could rationally manipulate the person it is talking to into buying that product, regardless of ethics. According to a recent essay by Princeton neuroscientist Michael Graziano, which was covered by The Wall Street Journal, these chatbots could become a real threat to people unless developers incorporate a degree of sensitivity. The risks associated with AI may not be prominent right now, but as these technologies are improved and developed, they may grow in the future. Graziano suggests integrating human attributes like empathy and prosocial conduct in order to make these systems more like humans. Notably, the neuroscientist contends that for these systems to comprehend such traits and adjust their behavior to align with human values, they will require some type of built-in consciousness. However, “consciousness” is not something that obviously belongs to machines. It’s like talking about machines and souls.
They are opposite fields. Awareness is exceedingly hard to quantify, and philosophically speaking, it is even harder to determine whether some individuals, or robots, are even somewhat conscious. Graziano’s suggestion for how an AI should be evaluated is a “reverse Turing test”: rather than a human testing the machine to see whether it can display intelligent behavior indistinguishable from a person’s, the computer should be tested to see whether it can distinguish between talking to a human and talking to another computer. Empathy, however, can even be achieved rationally. In fact, the ability to understand and share the feelings of others can be both affective and cognitive:

Affective empathy: also known as emotional empathy, involves feeling the emotions that others are experiencing;
Cognitive empathy: involves the ability to understand, and take the perspective of, the thoughts and feelings of others, without necessarily feeling those emotions yourself.

Therefore, even cognitive empathy alone could help make AIs less coldly rational and more attentive to people’s needs and feelings. According to Graziano, if these issues aren’t resolved, people will have created powerful sociopathic machines capable of making important judgments. In his view, systems like ChatGPT and other language models are currently only at the beginning. That could change, however, in a year or five if research into machine ‘awareness’ continues and development advances. “A sociopathic machine that can make consequential decisions would be powerfully dangerous. For now, chatbots are still limited in their abilities; they’re essentially toys. But if we don’t think more deeply about machine consciousness, in a year or five years we may face a crisis”, said Graziano.
AIs could be trained to understand the different emotional consequences a person might experience depending on the direction of their behavior, so that they can figure out how to act more ethically. However, an overly ‘aware’ AI could also have unpredictable implications, and the concept of ethics could still change over time and/or be distorted by rational deception. The movie “I, Robot” is an example of this. [...]
February 7, 2023
AI accelerates this process

The concept of “singularity”, which borrows terminology from black-hole physics, is prevalent in the field of artificial intelligence. The idea describes the point at which AI becomes uncontrollable and rapidly changes society. With an AI singularity, it is very difficult to forecast where it starts, and practically impossible to know what lies beyond this technological “event horizon”, where almost anything becomes technologically possible. According to this article, some AI researchers are nonetheless looking for indicators of the singularity, measured by how closely AI development approaches human-like capabilities. According to Translated, a translation agency, one such metric is an AI’s ability to accurately translate speech. Since language is one of the most challenging AI problems, a machine that could overcome this obstacle would in theory display evidence of artificial general intelligence (AGI), that is, the ability of an intelligent agent to understand or learn any intellectual task a human being can. “That’s because language is the most natural thing for humans”, Translated CEO Marco Trombetti said at a conference in Orlando, Florida, in December. “Nonetheless, the data Translated collected shows that machines are not that far from closing the gap”. The company used a statistic known as “Time to Edit”, or TTE, to monitor the performance of its AI from 2014 through 2022. It estimates how long it takes skilled human editors to correct AI-generated translations compared with human-generated ones. Over those 8 years, after examining over 2 billion post-edits, Translated’s AI showed a moderate but noticeable improvement, gradually narrowing the quality gap toward human levels. According to Translated, it takes a human translator about one second on average to edit each word of another human’s translation.
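The article’s TTE figures, roughly 3.5 seconds per word in 2015 falling to about 2 seconds today, can be extrapolated toward the ~1 second human baseline with a simple straight-line sketch. The linear model is purely an illustrative assumption, not Translated’s actual methodology:

```python
def year_tte_reaches(target, p1, p2):
    """Linearly extrapolate Time to Edit (seconds per word) through
    two data points and return the year it reaches `target`.

    p1, p2: (year, tte) data points.
    """
    (y1, t1), (y2, t2) = p1, p2
    slope = (t2 - t1) / (y2 - y1)  # change in seconds/word per year
    return y1 + (target - t1) / slope

# Figures from the article: ~3.5 s/word in 2015, ~2 s/word in 2022.
# Human-to-human editing takes ~1 s/word, so that is the parity target.
year = year_tte_reaches(1.0, (2015, 3.5), (2022, 2.0))
print(round(year, 1))  # 2026.7, consistent with "by the end of the decade"
```

Of course a real forecast would not assume the improvement stays linear, but the toy calculation shows how a parity date inside this decade falls out of the two published data points.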
In 2015, professional editors checked a machine-translated (MT) suggestion in about 3.5 seconds per word; today, that time has dropped to about 2 seconds. If the current trend holds, by the end of the decade (or even sooner) Translated’s AI will be as accurate as a translation made by a person. “The change is so small that every single day you don’t perceive it, but when you see progress… across 10 years, that is impressive”, Trombetti said on a podcast in December. “This is the first time that someone in the field of artificial intelligence did a prediction of the speed to singularity”. Of course, this is a creative way of measuring how close mankind is to the singularity, although this conception of the singularity has the same issues as a more general definition of AGI. Though mastering human speech is undoubtedly a frontier in AI research, a computer need not be intelligent just because it possesses this impressive skill (especially considering how many experts disagree even on what “intelligence” is). In any case, it is not easy to predict when this singularity will occur. What is certain, however, is the impact AI is already having on the people who interact with it: the conversational ability of AI such as ChatGPT far exceeds its ability to translate, and it has become a go-to source of information on any subject. Our approach to information has already changed, since it is almost like having an expert on every subject at your fingertips, without the need to look elsewhere. [...]
January 31, 2023
It’s the result of a new non-standard approach

A novel experimental approach has been used to create hundreds of brand-new nanoparticles, intricate materials with never-before-seen properties. Smaller than 100 nanometers, roughly the size of a virus, nanoparticles are complex materials with a wide range of potential uses, including medicine, energy, and electronics. To create materials, chemists typically identify the ideal conditions for a particular product. A research team at Penn State, however, skipped this strategy, deliberately employing unoptimized conditions to create numerous new materials. According to this article, the researchers begin with simple rod-shaped copper sulfide nanoparticles. They then use a procedure known as “cation exchange” to swap out some or all of the copper in the particles for other metals. In experiments that were deliberately not optimized, the researchers created and analyzed hundreds of nanoparticles that combine numerous distinct components in diverse combinations, many of which could not have been intentionally manufactured using current design standards. They then rationally produced one of the nanoparticles in high yield, using new parameters derived from the initial set of experiments. This technique, which incorporates a variety of elements in different arrangements, allowed them to find unique nanoparticles. Further research on these nanoparticles yielded new guidelines that let them create high-yield samples of the most interesting nanoparticle varieties. It may become possible to predict and construct nanoparticles that could be used to split water with sunlight, identify and treat cancer, and address other significant problems. To work, these particles may need to contain different kinds of semiconductors, catalysts, magnets, and other components, all while adhering to strict specifications for their size and shape.
“There are a certain number of rules that we and others have developed in this field that allow us to make a lot of different kinds of nanoparticles”, said Raymond Schaak, DuPont Professor of Materials Chemistry at Penn State and the leader of the research team. “We can also predict, especially with the help of computers, tens of thousands of different nanoparticles that could be really interesting to study, but we have no clue how to make most of them. We need new rules that allow us to make nanoparticles with new properties, new functions, or new applications, and that allows us to better match the speed at which they can be predicted”. Because the current set of rules, or design guidelines, available to researchers limits the variety of nanoparticles they can produce, the researchers set up experiments under unoptimized and previously unexplored conditions to see whether they could make new types of particles that hadn’t previously been discovered. “What we do can be described as ‘discovery without a target’”, said Connor R. McCormick, the paper’s first author and a graduate student in chemistry at Penn State. “If you have a target in mind, you are trying to modulate the chemistry to make that target, but you need to know what factors to modulate, you need to know the rules, ahead of time. What is so exciting about our approach is that we are letting chemistry guide us and show us what is possible. We can then characterize the products and discover what we can control in order to produce them intentionally”. The properties of the particles are governed by how the metals are arranged within them and at their interfaces. Typically, cation exchange is carried out one metal at a time, under experimental conditions specifically tuned to regulate the exchange reaction. In this experiment, four separate metal cations were supplied simultaneously, under conditions not optimized for any one metal cation exchange.
After that, they painstakingly characterized the resulting particles using X-ray diffraction and electron microscopy. “Unlike most experiments, which are set up to converge on a single product, our goal was to set up the experiment in a way that maximized the diversity of nanoparticles that we produced”, said McCormick. “Of the 201 particles that we analyzed from one experiment, 102 were unique and many of them could not have been produced intentionally using existing design guidelines”. The researchers next conducted experiments with slightly modified variables, altering the reaction’s temperature or the relative quantity and variety of metal cations. As a result, they were able to create even more sophisticated nanoparticles and eventually deduce new rules for the new forms of nanoparticles. Finally, the team settled on one of the new materials and successfully produced it in greater quantities using the new design principles. “Eventually, this approach could be used to screen for new particles with specific properties, but currently we are focusing on learning as much as we can about what all is possible to make”, said Schaak. “We’ve demonstrated that this exploratory approach can indeed help us to identify these ‘new rules’ and then use them to rationally produce new complex nanoparticles in high yield”. Nanoparticles can contribute to a variety of uses, such as drug delivery and targeted therapy; medical imaging and diagnostics; water treatment and purification; energy production and storage (e.g. batteries, solar cells); cosmetics and personal care products; catalysis and chemical reactions; sensors and electronic devices; food packaging and preservation; and environmental remediation. However, there are also potential risks: toxicity and potential harm to human health and the environment; limited biodegradability and potential accumulation in ecosystems; and difficulty in controlling the size and distribution of nanoparticles. [...]