Technology around us is constantly evolving, compelling us to think about how we live and will live, how society will change, and to what extent it will be affected. For better or for worse? It is difficult to give a clear answer. However, even art forms such as cinema can give us food for thought about society and ourselves, along with some psychological insight. All this to try to better understand ourselves, the world around us, and where we are headed.

The House blog tries to do all of that.

Latest posts
November 22, 2022
How A.I. will affect the near future

Artificial Intelligence is increasingly a hit. You can find some AI in every appliance, in most apps, and in various other devices. In the next few years, AI is going to be part of everything. However, there are some fields where AI will be employed most. Here are some.

Generative AI

These kinds of algorithms can generate new content from existing data such as text, images, videos, sounds, or code. The most famous example is the GPT-3 model by OpenAI, which can produce accurate textual results from a human prompt. It can therefore write an article about a specific topic, answer questions, generate code, create prose or summaries, etc. The outcome is so convincing that it looks written by a human. Another example is DALL-E, a GPT-3 counterpart that can generate beautiful images, again from a text prompt. In the coming years, we’ll see many of these AIs used as content-generation tools, so much so that we’ll be able to create, for example, a video using AI tools alone, since in addition to images, recent AIs can also generate music, speech, and lately animations, although the latter are still complex to generate well.

Ethical AI

As explained by Forbes, AI needs data to learn, and this data frequently consists of personal information. This might be extremely private data, such as health or financial information. The entire system collapses if we, the general public, don’t feel comfortable sharing our information and don’t understand how AI makes decisions. That’s why trust is important. In the next years, there will be initiatives to solve the “black box” issue with AI. In computing, a black box is a device, system, or program that lets you see the input and output but gives no view of the processes and workings in between. The AI black box, then, refers to the fact that with most AI-based tools, we don’t know how they do what they do.
Those responsible for deploying AI systems will exert more effort to make sure that they can clearly communicate how decisions are made and what data was employed to reach them. As companies learn how to remove bias and inequity from their automated decision-making systems, the role of AI ethics will also grow more important. Biased data has been demonstrated to cause prejudice in automated outcomes, which has the potential to result in discrimination and unfair treatment. This is inexcusable in a world where AI influences decisions about access to work, justice, and healthcare.

Augmented Reality

More of us will soon be working alongside robots and intelligent machines that were created to make our jobs easier and more effective. This might happen through smartphones that give us rapid access to data and analytics tools, as we have seen them employed more and more in the industrial and retail sectors. It could also mean headsets with Augmented Reality (AR) capabilities that project digital information over the real world. In a maintenance or manufacturing use case, this could give us real-time information that helps us identify dangers and threats to our personal safety. Access to real-time dashboards and reporting, which provide an immediate, up-to-the-minute overview of operational effectiveness, will be made more widely available to management and leadership teams. AI-powered virtual assistants, which can rapidly respond to inquiries and automatically offer different, more effective ways to achieve goals, will also become increasingly common in the workplace. In general, learning how to collaborate and work alongside smart machines will become a more valuable work skill.

Sustainable AI

All companies will face pressure to lessen their environmental impact and carbon footprint. In this regard, the rush to embrace and exploit AI has the potential to be both advantageous and detrimental.
AI algorithms demand a growing amount of power and resources, as does the infrastructure required to support and deliver them, such as cloud networks and edge devices. A 2019 study found that training a single deep-learning model may release 284,000 kilograms of CO2. At the same time, by locating areas of waste and inefficiency, the technology can help companies learn how to create goods, services, and infrastructure in a more energy-efficient way. The push toward more sustainable AI includes ongoing efforts to develop infrastructure powered by greener, renewable energy sources. As an example, computer vision is used in conjunction with satellite photography to detect illegal logging and deforestation in rainforests, as well as illegal fishing, which affects biodiversity in the oceans. AI will thus increasingly be a part of our world, just as the Internet has been and still is, with the difference that the potential of these algorithms, if underestimated and mismanaged, could have devastating consequences for people and information in general. We take it for granted that a system invented by experts was done right, but when exceptions occur and they are not handled, we may unnecessarily pay the consequences. Kind of like an innocent person in jail. [...]
November 15, 2022
How an AI can acquire a language

Speech, unlike a written text or scripted dialog, is full of imperfections such as false starts, interruptions, and incomplete sentences, or even mistakes, especially in informal conversations. That’s why it seems amazing that we can learn a language in the face of all these issues. However, many linguists say the reason we can learn a language easily is grammar. As they say, grammar is what rules the chaos that is language. According to Noam Chomsky, the modern linguist, children have an innate sense of language and grammar thanks to a hypothesized mental faculty called the LAD (language acquisition device), which is thought to contain the natural ability to learn and recognize a first language. The language acquisition device is where all people acquire their shared universal syntax. According to the LAD theory, children are born knowing a predetermined set of sentence structures, or potential combinations of subjects, verbs, objects, and modifiers. Although it’s rare for kids to master spoken grammar in their early years, the LAD hypothesis contends that by combining the sentence fragments and run-on sentences of everyday speech with intrinsic universal grammar rules, kids can develop a comprehensive language in just a few years. The LAD theory holds that a kid does not spend their early years just repeating words and phrases for no reason, but rather observes different grammar rules and supplementary rules to create new variants of sentence structure. However, as explained here, with the rise of A.I., we discovered how these powerful algorithms can write a variety of texts such as articles, poetry, code, etc., after being trained on a vast amount of language input. But without starting with grammar. Although the texts these AIs produce are sometimes nonsensical, subject to biases, and strange in word choice, most of the sentences they generate are grammatically correct without their having been taught any grammar.
A popular example is GPT-3, a huge deep-learning neural network containing 175 billion parameters. It was trained on hundreds of billions of words from the internet, books, etc., to anticipate the next word in a sentence based on the previous ones. An automatic learning algorithm is used to adjust its parameters whenever it makes a mistaken prediction. It’s like listening to or reading billions of texts without knowing the grammar, learning just by making connections deductively. Amazingly, GPT-3 can respond to instructions suggesting the writing style to adopt; you can ask for a description of a movie or for a synopsis. Additionally, by learning how to predict the next word, GPT-3 can answer analogy questions on par with the SAT, handle reading comprehension questions, and even solve basic math problems. But the resemblance to human language doesn’t end there. These artificial deep-learning networks appear to follow the same basic principles as the human brain, according to research published in Nature Neuroscience. The research team, headed by neuroscientist Uri Hasson, first evaluated how well humans and GPT-2, the previous version of GPT-3, could anticipate the next word in a story from a podcast. It turned out that people and the AI predicted the same word approximately 50% of the time. While listening to the story, the volunteers’ brain activity was monitored by the researchers. The best explanation for the activation patterns they observed was that people’s brains, like GPT-2, relied on the cumulative context of up to 100 prior words when generating predictions, rather than just the previous one or two words. “Our finding of spontaneous predictive neural signals as participants listen to natural speech suggests that active prediction may underlie humans’ lifelong language learning”. The fact that these latest AI language models receive a lot of input (GPT-3 was trained on linguistic data equivalent to 20,000 human years) could be cause for concern.
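The next-word-prediction principle behind these models can be sketched in a toy way. The following minimal example is invented purely for illustration (GPT-3 itself is a neural network with billions of parameters conditioning on long contexts, not a word-counting table): it "trains" by counting which word most often followed which, then predicts accordingly:

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count, for each word, which words follow it in the corpus."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(model, word):
    """Return the most frequent follower of `word` seen in training."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

# A tiny made-up "training set"
corpus = [
    "the cat sat on the mat",
    "the cat chased the mouse",
    "the dog sat on the rug",
]
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # -> cat (its most frequent follower)
```

A real language model replaces the counting table with a deep network and conditions on many previous words, not just one, but the training signal is the same: predict the next word, and adjust when wrong.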
However, a preliminary study that has not yet undergone peer review discovered that GPT-2, even when trained on merely 100 million words, can still simulate human next-word predictions and brain activations. That falls well within the range of possible linguistic exposure for a typical child over the first 10 years of life. However, we cannot say that GPT-3 or GPT-2 learn a language in the same manner as young children. In fact, these AI models don’t seem to understand much of what they are saying, if anything at all, despite the fact that comprehension is essential to using human language. Still, these models show that a learner, or an AI, can learn a language well enough from simple exposure to produce completely acceptable grammatical sentences, in a manner that resembles human brain function. Many linguists have long held the view that learning a language is impossible without an inherent grammar structure. The new AI models show the contrary: they show that grammatical language production can be learned through linguistic experience alone. Similarly, we could say that kids can learn a language without having innate grammar skills. To help them improve their language skills, children should participate in conversation as often as possible. Being a proficient language user requires linguistic experience more than grammar. AI gives us examples of what it is like to assimilate content for a very long time. Although an AI doesn’t have the consciousness of a person and doesn’t understand the way we do, we can theorize that the longer our exposure to a language, the more we can absorb its mechanisms. We could therefore predict the right structure of an incomplete sentence, or know the right word that fits among others, without having to understand the meaning. [...]
November 8, 2022
A city wedged between Serbia and Croatia

First established in 2015, Liberland will, thanks to the metaverse, be the first nation to be created and populated in the virtual world before being realized in the real world. According to this article, the renowned architectural firm Zaha Hadid Architects has been developing a concept for a virtual city that will house the nation’s expanding population. Liberland has 7,000 accepted residents despite not being an officially recognized nation, and another 700,000 citizenship applications are still being reviewed. The 7 km2 region, wedged between Serbia and Croatia, is disputed terrain claimed by neither nation, and it is larger than Monaco or Vatican City. Since being founded by former Czech MP Vit Jedlička, the micronation’s president, and his partner Jana Markovicova, Liberland has progressively improved its standing abroad. Before 2015, the libertarian Jedlička worked to establish what he saw as a brand-new civilization, unencumbered by the conventions of the past. He made great attempts, but he ran into too many obstacles. “At that point, I realized it might be easier to start a new country than change an existing one”, he said. Once this novel and fascinating idea had taken hold, the pair immediately turned to Google to look for land that might suit their needs. As a result, a small piece of abandoned land was discovered on the Danube’s west bank, and the Free Republic of Liberland was established. Since the dissolution of the Socialist Federal Republic of Yugoslavia, there has been a border dispute between Croatia and Serbia due to their competing claims to various Danube-side areas. However, the region Jedlička discovered on the western bank of the river had been claimed by neither Croatia nor Serbia nor any other nation; it was therefore in a state of terra nullius, a no man’s land.
That is, until April 13, 2015, when the founders of Liberland, led by Jedlička, the current president of the interim government, formally proclaimed the territory their own. “We are building a country that can serve as a good example for other countries. The biggest improvement is that, in Liberland, taxes are voluntary, and people are rewarded when they pay them”, Jedlička said. “We founded Liberland on April 13, 2015, to celebrate the birthday of Thomas Jefferson. We wanted to invoke the spirit of the American Revolution. We also want to combine the best elements of the American republic, Swiss democracy, and the meritocracy of Singapore. We want to put our system on the blockchain so that the government will work in a modern and transparent way”. Liberland grounds its claim to nationality on four fundamental principles of international law: a permanent population, a defined territory, a government, and, lastly, the capacity to engage in international relations with other states. “The first day we had 2,000 applications for citizenship, the second 10,000 and by the third, we had 200,000. This alone shows that there is a demand for what we are doing,” Jedlička said. By collaborating with Zaha Hadid Architects to develop a metaverse, Liberland is making a place where its tens of thousands of residents can meet without having to travel to that little, now-empty piece of land. In fact, since visitors are then not at risk of being detained by Croatian police, it may be a safer option for its potential citizens. The principal architect of Zaha Hadid Architects and a longtime Liberland advocate, Patrik Schumacher, oversaw earlier architectural competitions to develop a concept for a real-world Liberland. By turning to the metaverse, he is offering a different and faster way for Liberland’s people to enter the micronation.
A legend in the field, Schumacher is the inventor of parametricism, a term first used in 2008 that essentially changed how architecture interacts with computer technology and algorithms. His description of the future of the virtual world, “12 Theses on the Advent of the Metaverse”, was recently released. According to Schumacher’s main argument, the metaverse will enable lifelike telepresence, co-location synergy, explorative browsing, immersiveness, collective experiences, and so on. All websites will spatialize, all organizations will enter the metaverse, and all physical venues will be supplemented or replaced by functionally identical virtual venues, ensuring that everyone takes advantage of this opportunity. His second claim is that there is only one reality in the metaverse. “The metaverse is neither a game nor fiction. Virtual reality in the metaverse will be no less real than the physical reality in our cities”, Schumacher wrote. “Physically and virtually mediated social communicative interactions are equally significant and together form an undivided continuous social reality. There will be both competition and cooperation within and across these realms”. “In the coming age of VR-empowered cyberspace, it will be architects and no longer graphic designers who will design the coming 3D immersive internet: the metaverse”, Schumacher wrote. “This expansion of architecture’s remit will further distill the discipline’s essence and core competency, namely the spatio-visual ordering of communicative interaction, upgraded via investment into the subdisciplines of spatiology, phenomenology, semiology, and dramaturgy”. In many ways, if we are to build cities that make sense rather than just look nice, it makes sense to finally bring architects into the metaverse.
“We focus much energy on creating physical environments for social interaction and productivity, and we are now entering the realm of UX design for complex real-time multi-user interaction in Virtual Reality platforms. Architects understand how to connect 3D space with social networking”. The metaverse will have a big impact on how we experience the Internet. The trend will be toward a parallel digital world where opportunities existing in the real world can be reproduced or multiplied. However, there is a danger of giving the metaverse more importance than the real world, of being so captured by a virtual environment that we no longer prefer reality. [...]
November 1, 2022
Can art be just an algorithm?

AI image generators are spreading across the internet, and they are being used to create “art” easily. However, not all of those using these tools are artists, and this creates a further problem: real artists are worried that their style can be copied by A.I. and used by other users. For example, Greg Rutkowski is an artist with a unique style, renowned for his fantasy battles and dragon paintings that have been featured in Dungeons & Dragons and other fantasy video games. “Really rare to see a similar style to mine on the internet”, he remarked. However, if you look up his name on Twitter, you’ll see a ton of images that weren’t created by him but are in his style. As explained here, though he has never personally used the technology, Rutkowski has become one of the most well-known names in AI art. AI image generators, which create unique artwork in seconds after a user enters a few words as instructions, are being used to make thousands of works of art that resemble his. On one image generator, Stable Diffusion, Rutkowski’s name has been used to generate almost 93,000 AI images, making him a much more popular search term in the software than Picasso, Leonardo da Vinci, or Vincent van Gogh. “I feel like something’s happening that I can’t control”, the Polish artist said. Instead of assembling collages from stock pictures, AI image generators produce original images. It’s similar to searching Google Images, except the results are entirely original pieces of art produced under the direction of the user’s search terms. One of the most popular challenges is to take the name of an artist and produce something that reflects their style. “People are pretending to be me”, Rutkowski said. “I’m very concerned about it; it seems unethical”.
While not inherently opposed to AI-generated art, Swedish artist and designer Simon Stålenhag expressed concern about how some individuals are employing the new technology. “People are selling prints made by AI that have my name in the title”, he said. He thinks AI-generated images are not in the control of artists. Rutkowski, who creates art using both traditional oil painting techniques on canvas and computer technologies, is concerned that his distinctive style, which has helped him earn contracts with Sony and Ubisoft, might become obsolete in light of the boom in imitative artwork. “We work for years on our portfolio”, Rutkowski said. “Now suddenly someone can produce tons of images with these generators and sign them with our name”. “The generators are being commercialized right now, so you don’t know exactly what the final output will be of your name being used over the years”, he said. “Maybe you and your style will be excluded from the industry because there’ll be so many artworks in that style that yours won’t be interesting anymore”. AI image generators are being used by more and more consumers. Elon Musk co-founded OpenAI in 2015, and in September it released its DALL-E image generator to the general public. According to OpenAI, the service had more than 1.5 million users once it became available to everybody. The ease with which AI can replicate styles, according to Liz DiFiore, head of the Graphic Artists Guild, a group that promotes designers, illustrators, and photographers across the US, might have a negative financial impact on artists. “If an AI is copying an artist’s style and a company can just get an image generated that’s similar to a popular artist’s style without actually going to artists to pay them for that work, that could become an issue”. Unfortunately, the only thing the law can do to protect artists is prevent others from copying their actual works of art.
Some AI image generator policies, like those of DALL-E, Midjourney, and Stable Diffusion, prevent customers from using their services in specific ways. For instance, OpenAI forbids the use of pictures of politicians or celebrities. In addition, by filtering content like nudity and gore, all three applications prevent users from producing “harmful content”. A Stable Diffusion spokesperson stated, though, that the company was developing an opt-out system for artists who do not want AI programs to be trained on their work. The spokesperson continued by saying that an artist’s name is only one component in a diversified collection of instructions to the AI model, which develops a unique style distinct from any individual artist’s. While Midjourney didn’t respond, OpenAI representatives stated the company would seek artists’ viewpoints as it expanded access to DALL-E, but did not define any safeguards in place to protect current artists. AI image generators “train” by acquiring data from extensive caption and image databases. OpenAI representatives said that DALL-E’s training data was made up of both freely accessible sources and photographs that the company had licensed. Stable Diffusion representatives said that the application gathers data and images using web crawls. According to Rutkowski, living artists should have been left out of the datasets used to train the generators. The generators are purposefully “anti-artist”, according to another designer and illustrator named RJ Palmer, who said they are “explicitly trained on current working artists”. On a website called Have I Been Trained, founded by the German sound artist Mat Dryhurst and the American sound artist Holly Herndon, artists can find out if their work has been used to train AI programs. Stålenhag acknowledged that it would have been good to be asked for permission to be included in the training data, but claimed that this was an unavoidable side effect of posting art online.
It’s not clear whether copyright laws will safeguard the artwork those AI programs generate. Because of the ambiguity around copyright and commercial use, certain stock-image libraries, like Getty Images, have refused to carry AI-generated artwork. The US Copyright Office stated that purely machine-generated works lack the human authorship required to substantiate a copyright claim. In their statement, they said that the office would not knowingly issue a registration to a work alleged to have been made purely by machine with artificial intelligence. While generated images may be employed for commercial offers, Stable Diffusion representatives stated the company was unable to say whether the images would be subject to copyright; they stated that each country’s legislative branch would have to decide on this. A spokesperson from OpenAI said: “When DALL-E is used as a tool that assists human creativity, we believe that the images are copyrightable. DALL-E users have full rights to commercialize and distribute the images they create as long as they comply with our content policy”. DALL-E and other AI image generators are used by commercial photographer Giles Christopher, a food and drink specialist based in London, to experiment with portraits and create backgrounds for some of his commercial images. “I’ve come out with images that you wouldn’t question are photographs”, he said. “Some of the arguments I’ve had from photographers are that the images are looking too good”. He believes that artists should try to incorporate A.I. into their work. “I have friends in the industry who will storm out of the room if I even bring up using AI”, he said. He’s keeping an open mind, though. We thought that art would be the last field to be replaced by A.I., but maybe we were wrong. These new A.I. image generators are quickly changing the way we can produce art. Still, artists have the right to safeguard their works and styles from being copied.
However, no AI, at least at the moment, can replicate pieces of art on canvas, whether oil or acrylic paintings. For non-experts, having the chance to produce some kind of art mimicking another artist’s style is amazing, but for artists themselves it could be hard if we don’t find a way to safeguard them, because eventually nobody will be an artist anymore, unless we decide that real art is only non-digital. [...]
October 26, 2022
AIs could distort their results

Some AIs make choices or learn based on reinforcement given via a “reward”, in a process called reinforcement learning, where the software decides how to maximize that reward. However, this reinforcement can lead to dangerous results. The pathologist William Thompson first considered what is now known as the reinforcement learning problem in 1933. Given two untested therapies and a population of patients, he wondered how to cure the most patients. For Thompson, choosing a course of therapy was the action, and a patient cured was the reward. More broadly, the reinforcement learning problem concerns how to arrange your behavior to optimally gain rewards over the long run. The difficulty is that at first you are unaware of how your actions affect rewards, but over time you become aware. As explained in this article, computer scientists began attempting to create algorithms to address reinforcement learning problems in a variety of contexts as soon as computers were invented. The idea is that if the artificial “reinforcement learning agent” only receives rewards when it follows our instructions, then the reward-maximizing actions it learns will help us achieve our goals. However, as these systems become more powerful, they are likely to begin acting against the interests of people. Not because wicked or dumb reinforcement learning operators would give them the incorrect rewards at the incorrect times, but because any sufficiently powerful reinforcement learning system, assuming it meets a few reasonable assumptions, is likely to fail in this way. To see why, let’s start with a very basic reinforcement learning setup. Imagine we have a box that displays a score between 0 and 1, which is the output of the algorithm, and a camera as an input that provides this number to a reinforcement learning agent, and we ask the agent to choose actions that will increase the number.
To choose actions that maximize its rewards, the agent must be aware of how its actions affect them. Once it starts, the agent should notice that past rewards have always matched the numbers from the output. It should also notice that the numbers from the input matched those past rewards. So, will future rewards equal the number from the input or from the output? One experiment would be to place a test item between these two options, so the agent can recognize the difference between the past and the next reward. The agent will then focus on the input. But why would a reinforcement learning algorithm put us at risk? The agent will always work to make it more likely that the input captures a 1. Therefore, the agent would force the way the reward is achieved rather than pursuing the intended goal for which the algorithm is used. It would sacrifice the goal for the reward rather than reaching the reward through the goal. The algorithm may thus sacrifice resources and/or goals only to increase its reward. [...]
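Thompson’s original two-treatment problem can be made concrete with the algorithm now named after him, Thompson sampling. This is a toy simulation with made-up cure rates, meant only to show the reward-maximizing loop the post describes, not the powerful agents it worries about:

```python
import random

def thompson_sampling(true_cure_rates, n_patients, seed=0):
    """Treat patients one by one, choosing each treatment by sampling
    from a Beta posterior over its unknown cure rate (Beta(1,1) prior)."""
    rng = random.Random(seed)
    cures = [1, 1]     # observed cures per treatment (+1 from the prior)
    failures = [1, 1]  # observed failures per treatment (+1 from the prior)
    total_cured = 0
    for _ in range(n_patients):
        # Sample a plausible cure rate for each treatment; pick the best.
        guesses = [rng.betavariate(cures[t], failures[t]) for t in (0, 1)]
        t = guesses.index(max(guesses))
        # Administer the chosen treatment and observe the outcome (the reward).
        if rng.random() < true_cure_rates[t]:
            cures[t] += 1
            total_cured += 1
        else:
            failures[t] += 1
    return total_cured

# Treatment B secretly cures 70% of patients, treatment A only 40%.
cured = thompson_sampling([0.4, 0.7], n_patients=1000)
```

Because the agent quickly learns that the second treatment pays off more often, the number cured ends up much closer to 700 than to the roughly 550 a coin-flip policy would average, even though the true rates were never revealed.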
October 18, 2022
But it doesn’t mean they will be less dangerous

When you think about killer machines, the Terminator and HAL 9000 are the first that come to mind. But when you look at Spot by Boston Dynamics, you cannot help but think about a dystopian episode of Black Mirror, and that is what everybody still thinks about the latest robots companies are producing. However, as explained here, movies provide good prompts: Robert Wallace, former chief of the CIA’s Office of Technical Service in the US, has recalled how Russian spies would study the most recent Bond film to see what technologies might be on the horizon for them, even if those technologies were less cool than you might think. Nonetheless, killer robots are far from being sentient and having evil intent. In fact, robots may never be sentient, despite current fears to the contrary. The technology we should be concerned about is far simpler. The TV news shows us how increasingly autonomous drones, tanks, ships, and submarines are changing modern warfare, yet these robots aren’t much more advanced than the ones you can buy on your own in a shop. And increasingly, their algorithms are being given authority to decide which targets to locate, follow, and destroy. This is putting the world in danger and posing a number of ethical, legal, and technological issues. For instance, these weapons will exacerbate the already unstable geopolitical environment. Furthermore, such weapons breach a moral line and bring us to a horrific and terrifying age in which people’s fates are decided by unaccountable machines. However, robot developers are beginning to fight back against this scenario. Six major robotics businesses made a promise never to weaponize their robot systems. The companies include Boston Dynamics, which creates the abovementioned Spot as well as the Atlas humanoid robot, which is capable of incredible backflips.
Although robotics companies have previously expressed their concerns about this unsettling future, third-party-mounted guns have already been seen on clones of Boston Dynamics’ Spot robot, for example. Therefore, although some companies refuse to employ their robots for warfare purposes, others may not do the same.

“All the people who laughed off the ‘worrywarts’ years ago for freaking out about the Funny Dancing Robot Dogs ™ should be forced to watch this video once a day for the remainder of the year. pic.twitter.com/WBIrlGah3w”
Sean Chiplock (@sonicmega), July 20, 2022

A first step to protect ourselves from this horrific future is for nations to act as a group, just as they did with chemical, biological, and even nuclear weapons. Such legislation won’t be perfect, but it will stop arms companies from openly marketing these weapons, thereby limiting their spread. The UN Human Rights Council’s recent unanimous decision to examine the human rights implications of new and developing technology like autonomous weaponry is therefore even more significant than a pledge from robotics companies. The UN has previously been asked by a number of governments to regulate potential killer robots. Numerous groups and individuals, including the African Union, the European Parliament, the UN Secretary-General, Nobel Peace laureates, politicians, religious figures, and thousands of AI and robotics experts, have urged regulation, although Australia has not yet lent its support to these calls. Being aware of the risks in advance can be a good thing, even though, as Murphy’s law says, “anything that can go wrong will go wrong”. So if a technology can do harm, somebody will inevitably use it for that. We can only try to mitigate the risks; we cannot get rid of them. AI and robots can be very helpful technologies as well as very dangerous weapons.
That’s why we need regulation, but also technology able to counter the dangers and protect us. And above all, we should be more aware of how we use technology and of its consequences. [...]
October 11, 2022
The next step to art creation by AIs

Recently, we have been seeing ever more how the power of AIs can accomplish different tasks in different fields, not always perfectly but with amazing results. Just think about the latest AI image-generation tools spreading across the internet. Day after day they produce beautiful images, even copying famous artists’ styles. Meta is now trying to take a step forward with a tool able to generate videos through Artificial Intelligence. Its new tool, called Make-A-Video, is available via Twitter. Although the results may look pretty weird, it would be no surprise if AI video-generation tools overtook AI image-generation tools as a new trend.

Photo by Meta

However, achieving good results is not as easy as it is for images. An animation needs a higher degree of coherence between frames and the ability to make subjects interact and move accordingly. That’s why the error rate rises. In addition, video generation needs much more data to draw from. Anyway, although we are at an early stage, Meta has achieved good results, and Make-A-Video can generate results from just a few words as a prompt, just like DALL-E or Midjourney. According to the research paper, the Meta team used an evolved version of a diffusion text-to-image generation model to animate images. The lack of large datasets with high-quality text-video pairs is still a problem: because modeling higher-dimensional video data is so complex, text-to-video models would normally need training datasets far larger than those used for images.

Photo by Meta

To generate images, diffusion models begin with randomly generated noise and then gradually adjust it to get closer to the goal prompt; the quality of the training data has a significant impact on how accurate the outcomes are. But the amazing thing about the Meta algorithm is that it doesn’t need paired text-video data and therefore doesn’t require as much data to work.
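The denoising loop described above can be sketched in a toy form. This is only an illustration of the loop’s structure, not Meta’s model: the “denoiser” here is a stand-in that pulls values toward a fixed target vector, whereas a real diffusion model uses a learned neural network conditioned on the text prompt.

```python
import random

def toy_denoise(target, steps=50, seed=0):
    """Toy diffusion-style loop: start from noise, iteratively move toward a goal."""
    rng = random.Random(seed)
    # Step 1: begin with randomly generated noise, one value per dimension.
    sample = [rng.gauss(0.0, 1.0) for _ in target]
    for _ in range(steps):
        # Step 2: gradually adjust the sample to get closer to the goal.
        # A real model would instead predict and subtract noise at each timestep,
        # guided by the text prompt; here we just nudge 20% of the way per step.
        sample = [s + 0.2 * (g - s) for s, g in zip(sample, target)]
    return sample

# After many small steps, the noisy start has converged near the target.
result = toy_denoise([1.0, -1.0, 0.5])
```

The point is the shape of the process: nothing about the final output is present at the start, it emerges from many small corrections, which is also why the quality of what guides those corrections (the training data) matters so much.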
Currently, Make-A-Video generates silent clips made up of 16 frames at 64 x 64 pixels, which are subsequently upscaled to 768 x 768 pixels using another AI model. They barely last five seconds and only show one action or scene. According to Meta, Make-A-Video’s AI learned “what the world looks like from paired text-image data and how the world moves from video footage with no associated text”. It was trained using more than 2.3 billion text-image pairs from the LAION-5B database and millions of videos from the WebVid-10M and HD-VILA-100M databases. Meta claims that static images with paired text are enough for training text-to-video models, since they may be used to infer movements, activities, and events. In a similar way, even without any text describing them, “unsupervised videos are sufficient to learn how different entities in the world move and interact”. The researchers acknowledged that, like “all large-scale models trained on data from the web, models have learned and likely exaggerated social biases, including harmful ones”, but claimed to have done what they could to control the quality of the training data by filtering LAION-5B of all text-image pairs that contained NSFW content or toxic words. One of the main problems in the industry is preventing AIs from producing insulting, false, or dangerous content. Anyway, the results look like stop-motion videos with some glitches that make them seem surreal or dreamy. The tool can be applied in a few different ways, such as giving motion to a single image, filling in the gaps between two photos, or creating new iterations of a video based on the original. It’s not hard to imagine a future where our stories could come to life in a movie completely generated by an A.I., where not only images but also music and dialogue are created by an algorithm. That would be amazing for those who would like the opportunity to see what their stories would look like.
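The kind of caption filtering described above can be sketched as follows. Everything here is hypothetical for illustration: the blocklist entries, the (caption, image_url) pair format, and the simple word match are assumptions, and Meta’s actual filtering pipeline is certainly more sophisticated.

```python
# Hypothetical blocklist; a real one would be far larger and more nuanced.
BLOCKLIST = {"toxicword", "nsfwterm"}

def filter_pairs(pairs):
    """Keep only (caption, image_url) pairs whose caption contains no blocked word."""
    kept = []
    for caption, image_url in pairs:
        words = set(caption.lower().split())
        # Drop the pair if any caption word appears in the blocklist.
        if words.isdisjoint(BLOCKLIST):
            kept.append((caption, image_url))
    return kept

sample = [
    ("a dog runs on the beach", "img1.jpg"),
    ("toxicword in this caption", "img2.jpg"),
]
clean = filter_pairs(sample)  # only the first pair survives
```

Even this naive version shows why filtering at billion-pair scale is hard: a plain word match misses misspellings, euphemisms, and harmful image content with innocuous captions, which is part of why biased or unsafe outputs remain an open problem.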
But some creators may worry this technology could steal their creativity. However, these tools could integrate with existing creative processes, adding new styles. Once the quality becomes hyperrealistic, though, the bigger problem will be dealing with media that look so real they could be mistaken for genuine footage, with all the risks that entails. [...]
October 4, 2022
A robot for everybody

The first prototype of the long-awaited Tesla robot has been shown. As already claimed, the humanoid, called Optimus, has some of the sensors and A.I. software you can find in Tesla cars. As explained here, Musk’s expectation is to sell the robot for less than $20,000, hoping it will be mass produced in millions of units, unlike other humanoid robots. And suddenly the movie “I, Robot” comes to mind. During the presentation, his team showed a skinless prototype called “Bumble C” that walked forward and did a dance move. We didn’t see much of its capabilities, but the company showed some clips of other capabilities, such as picking up boxes. Afterward, the team brought to the stage another prototype, closer to the intended production version, this time completely assembled but still not fully functional and not ready to walk. It features Wi-Fi and LTE connectivity, a 2.3 kWh battery pack, and a Tesla SoC. The robot’s joints, such as its hands, wrists, and knees, were the focus of demonstrations that illustrated how data for each joint was analyzed before looking for commonalities among designs, in order to identify a technique that required only six distinct actuators. The “Biologically Inspired Design” of the human-like hands, according to engineers, will make them better suited for picking up things of various shapes and sizes and for gripping minuscule components precisely. The Autopilot software from Tesla was transferred from its cars to the bot and redesigned to work in the new form and setting. Tesla motion-captured humans performing real tasks, like lifting a box, and had Optimus replicate the movements using inverse kinematics. “Online motion adaptation” is used to reduce the rigidity of these activities and enable them to be adjusted to account for an unstructured environment. “It’ll be a fundamental transformation for civilization as we know it”, said Musk.
He goes on to argue that Optimus may potentially increase economic output by “two orders of magnitude”. However, Musk urged his followers not to expect the prototype to resemble the glossy black-and-white depiction first displayed at the event last year. Anyway, it may eventually become more important than the automobile business. Future uses might involve cooking, gardening, or even sex partners, according to Elon Musk, who also stated that production may begin very soon. In the days before AI Day, experts in robotics cautioned against placing too much stock in Musk’s promises. They pointed out that other companies are significantly more advanced in creating robots that can run, jump, and walk, yet none of them claim to be close to replacing workers. Undoubtedly, many expected more from this robot compared to other companies’ efforts, like Boston Dynamics’ Atlas or Honda’s discontinued Asimo, and its black head may look a little disturbing. However, its moves look smoother than those of Xiaomi’s robot. Anyway, robot production is still in its infancy if you think about where A.I. is going. We don’t know how this robot will evolve, but it will probably be one of the first to enter our homes.

Will robots replace human labor?

When it comes to robots, the first topic that comes up is human labor. However, it’s strange that people fear losing repetitive and boring jobs instead of imagining a fairer society where we are less overwhelmed by jobs that don’t improve us or that we simply don’t like. If robots could take over those jobs, we could focus on more creative, pleasant work and avoid the more dangerous tasks. We should imagine a world where we work less and are less frustrated, and where primary goods are guaranteed to everyone. Robots, for example, could be taxed to redistribute wealth. In the near future, we’ll need to work less and better, not harder, and help those who can’t work or who lose their jobs. We need a system where everybody lands on their feet.
Work placement should be made easier, even for self-employed people, but nobody should fear losing a job, because when you’re cut off there’s always help. [...]
September 27, 2022
A powerful technology not without risks

The physicist Richard Feynman first described the ideas behind nanotechnology to the public in 1959, defining it as synthesis through the reconstitution of atoms and molecules. At this scale, referred to as the nanoscale, nanotechnology spans science, medicine, engineering, computing, and robotics. Nanotechnology has seen major advancements and excellent new uses every year. Energy, robotics, agriculture, health, computation, military intelligence, and manufacturing have all experienced amazing progress, and these are just a few of the fields where nanotechnology has made significant strides. Nanotechnology can rescale and manipulate particles to produce chemical bonds that can be hundreds of times stronger than steel. These bonds expand a material’s surface area so that more atoms can interact with it, which increases the material’s strength, conductivity, and malleability relative to its naturally-sized equivalents. A nanotech product’s density, lightness, size, transparency, and ability to reflect or absorb waves depend on how the particles are handled: nanomaterials are the products of particle manipulation.

Nanomaterials

According to this article, nanomaterials are classified into two main categories: naturally occurring (such as blood hemoglobin) and artificially developed (such as quantum dots). There are four main categories of artificially produced nanomaterials: dendrimers, metal-based, carbon-based, and nanocomposites. Dendrimers either expand outward from a strong core or inward from a solid outer shell, whereas carbon-based and metal-based nanomaterials are formed through the chemical manipulation of elements to derive micro-matter constructs, and nanocomposites combine different nanomaterials with larger-scale, high-volume materials. To be categorized as a nanomaterial, a substance must be engineered at the scale of a nanometer, one billionth of a meter.
Our daily lives already include a significant amount of nanotechnology. For instance, in recent years, lightweight road, sea, air, and space vehicles have been developed using nanotechnology, which has also improved imaging equipment, diagnostic methods, and even aspects of medicine itself, such as the delivery of antigens to damaged cells while avoiding healthy ones.

Nanodevices

Nanobots are tiny machines designed to carry out a specific activity. They have been significant in many important modern developments: in virology, clean energy, water filtration, and 3D printing. They work on both bioorganic materials and inorganic matter. Nanobots have a variety of uses, including delivering medications, moving collectively to increase the gathering of wind and solar energy, cleaning contaminated water, and connecting together to reproduce a 3D object and carry out its intended function. Self-repair of structural surfaces is currently being tested: the ability of nanotechnology to attach to deteriorated roads, bridges, and trains to repair structural problems and material deficiencies could be significant for transportation infrastructure. Enzyme synthesis is also underway, along with the development of synthetic ethanol. Ethanol is a limited resource, naturally generated from fossil sources, that is used for a variety of purposes, such as fuel, a binder for personal care products, and household cleaners. Another area in which nanotech is currently seeking real-world testing is robust rechargeable industrial battery systems. Imagine being able to produce an endless supply of electricity. This may become feasible in the near future thanks to nanobots used as self-adaptive sensors that collaborate with nanomaterials built into self-servicing generators capable of supplying cities with eco-friendly energy.
Nanochips, which can fit the memory of your computer and phone on minuscule storage devices, are another discovery. Since nanotransistors have been used in commercial applications since 2014, this advancement might not be too far off. Nanotechnology is also being used in gene sequencing, genetic engineering, research into tissue and organ regeneration, and the eradication of diseases. Even though this is one of the applications of nanotechnology that is furthest from practical implementation, it holds great potential. In the future, we may be able to use gene sequencing to help eradicate inherited diseases and swap out faulty sequences for beneficial ones. Although it could be argued that nanotechnology is already having an impact, we are only at the beginning of its development. For instance, the combination of A.I. and nanotechnology has long been postulated for its possible advantages in predicting and managing space exploration, and in resolving and managing environmental catastrophes through the analysis of universal patterns and behaviors. Applications to eradicate climate issues or create new climatic systems on otherwise uninhabitable worlds are conceivable, despite being a long way off.

Pros and cons

Nanotechnology will surely bring several advantages with it. It will revolutionize many areas of manufacturing through the construction of new materials with specific properties (such as resistance, or the ability to react to external events and self-repair). In addition, it will also be a useful technology for producing and handling energy, and for saving costs. For instance, quantum dots are small light-producing cells that could be employed for display screens or illumination. Not to mention the advances in medicine: nanobots could be injected into a patient’s arteries to remove obstructions, surgery could become considerably more efficient and precise, and injuries could be corrected.
However, we can’t fail to mention the possible negative effects of this technology. The most dystopian theories suggest that a scenario known as “gray goo”, in which self-replicating nanobots consume everything around them to create copies of themselves, will eventually come to pass. Others think that the changes in manufacturing will impact jobs, but the scariest predictions involve using nanotechnology as an invisible weapon to spy on people or even kill them, while genetic manipulation could raise ethical issues. And finally, think about what the power of A.I. combined with the power of nanotechnology could do. [...]
September 20, 2022
Virtual and augmented reality may be the next weapon for manipulating people

The metaverse is going to be the next big approach we’ll have to the internet, and it will be more powerful since it will bring us into a parallel world involving almost all our senses. According to a recent McKinsey study, many people anticipate using the metaverse for more than four hours per day within the next five years. Virtual and augmented worlds, which will overlap but develop at different rates and involve various players with probably distinct business models, will be the two main components of the metaverse. Gaming and social media will give rise to the virtual metaverse, which will create whole virtual worlds for short-term activities including socializing, shopping, and entertainment. The mobile phone industry, instead, will give rise to the augmented metaverse, which will enhance the actual world with immersive virtual overlays that integrate artistic and educational content into our daily lives at home, at work, and in public settings. According to this article, whether virtual or augmented, this will give metaverse platforms incredible power to track, profile, and influence consumers at levels far beyond anything we have seen so far. This is because metaverse platforms would not only track users’ clicks but also their movements, activities, contacts, and viewing habits. To determine whether users slow down to browse or speed up to pass places they are not interested in, platforms will also monitor posture and gait. In addition, they will even be able to track which items you pick up off the shelf in real or virtual stores to examine, how long you spend studying a product, and even how your pupils dilate to signal your level of interest. Because the device for this technology will be worn during daily life in the augmented metaverse, the ability to track gaze and posture creates special security issues.
Platforms will be aware of which stores users look into, how long they gaze inside, and even which parts of the display catch their interest as they stroll through real streets. Additionally, gait analysis can be used to diagnose psychological and physiological disorders. Metaverse systems will also monitor users’ vital signs, facial expressions, and vocal inflections while A.I. algorithms analyze their emotions. Platform providers will use this to give avatars more realistic facial expressions and a more lifelike look. While these characteristics are helpful and humanizing, without restrictions the same information could be used to develop emotional profiles that show how people respond to various stimuli and circumstances in their daily lives.

Virtual Product Placements vs Virtual Spokespeople

The dangers increase when we take into account that behavioral and emotional profiles may also be employed for targeted persuasion. Invasive monitoring is obviously a privacy concern. Consequently, two distinctive types of metaverse marketing are likely to emerge:

Virtual Product Placements (VPPs): simulated goods, services, or activities that are inserted into an immersive environment (virtual or augmented) on behalf of a paying sponsor, such that the user perceives them as organic components of the surrounding landscape.

Virtual Spokespeople (VSPs): deepfakes or other characters that are inserted into immersive environments (virtual or augmented) and verbally convey promotional content on behalf of a paying sponsor, frequently engaging users in promotional conversation moderated by A.I.

Consider Virtual Product Placements being used in a virtual or augmented city to understand the impact of these marketing strategies. While product placements are passive, Virtual Spokespeople can be active, engaging users in promotional conversation on behalf of paying sponsors.
While such capabilities seemed out of reach just a few years ago, recent breakthroughs in the field of Large Language Models (LLMs) make them viable in the near term. The verbal exchange could be so authentic that consumers might not realize they are speaking to an AI-driven conversational agent with a predefined persuasive agenda. This opens the door to a wide range of predatory practices that go beyond traditional advertising toward outright manipulation. Tracking and profiling consumers is not a new issue for social media sites and other tech services. However, the scope and level of user surveillance will drastically increase in the metaverse. Propaganda and predatory advertising are not recent issues either. However, in the metaverse consumers may find it challenging to distinguish between genuine experiences and targeted commercial content injected on behalf of paying sponsors. As a result, metaverse platforms would be able to readily alter user experiences, without the users’ knowledge or consent, on behalf of paying sponsors.

Authenticity

Advertising is widespread everywhere in the real and digital world. As a result, people can view advertisements in their correct context, as paid messages sent by someone trying to influence them. Thanks to this context, consumers can apply healthy skepticism and critical thinking when evaluating the goods, services, political viewpoints, and other information they are exposed to. By blurring the boundary between genuine, chance encounters and targeted promotional experiences injected on behalf of paying sponsors, advertisers could undermine our capacity to interpret promotional content in the metaverse. This could easily cross the line from marketing to deceit and turn into predatory conduct.
The whole environment would be staged: what you see and what you hear would serve commercial purposes, personalized for maximum impact based on your profile. The purpose of this sly advertising may appear benign, but the same strategies and tactics might be employed to promote experiences that underpin political disinformation, propaganda, and outright lies. Therefore, immersive marketing strategies like virtual product placements and spokespeople must be regulated to safeguard consumers. Regulations should at the very least safeguard the fundamental right to authentic experiences. This could be accomplished by mandating that promotional artifacts and promotional individuals be overtly distinguishable, visually and acoustically, so that consumers can understand them in the appropriate context. This would shield consumers from mistaking experiences that were manipulated for commercial purposes for real encounters.

Emotional privacy

Humans have evolved the capacity to convey emotions through words, posture, gestures, and faces, and we have likewise developed the ability to recognize these characteristics in others. This is a fundamental mechanism of human communication that runs concurrently with spoken language. Recently, software systems have become able to recognize human emotions in real time from faces, voices, posture, and gestures, as well as from vital signs like respiration rate, heart rate, eye motions, pupil dilation, and blood pressure. This is made possible by sensing technologies combined with machine learning. Even the patterns of facial blood flow picked up by cameras can be employed to decipher emotions. While many see this as giving computers the ability to communicate with people nonverbally, it is very easy to cross the line into intrusive and exploitative privacy violations, due to sensitivity and profiling, respectively.
Computer systems are already sensitive enough to recognize emotions from cues that are imperceptible to humans. For instance, blood pressure, respiration rate, and heart rate are difficult for humans to notice, so those signals may express feelings that the person being observed did not intend to reveal. Computers can also recognize “micro-expressions” on faces, fleeting or subtle enough that humans can miss them, which can likewise reveal feelings the observed person did not want to show. People should at the very least have the right not to have their emotions evaluated at a speed and level of detail beyond what is typically possible for humans. This would exclude the use of physiological indicators and facial micro-expressions in emotion recognition. Additionally, the risk to users is increased by platforms’ capacity to store emotional data over time and develop profiles that could enable A.I. systems to forecast how consumers will respond to a variety of stimuli. AI-driven Virtual Spokespeople that converse with people about products would likely have access to vital signs, vocal inflections, and facial expressions in real time. Without regulation, these conversational machines might be programmed to change their marketing strategy mid-conversation based on the emotions of their target, even if those feelings are subtle and impossible for a human to pick up on.

Behavioral privacy

Most consumers are aware that major tech companies monitor and profile their online activities, including the websites they visit, the advertising they click, and the social network connections they make. In an unregulated metaverse, large platforms will be able to track user activity including not just where users click but also where they go, what they do, who they are with, how they move, and what they look at.
All of these actions can also be analyzed for emotional reactions, tracking not just what people do but also how they feel while doing it. For platforms to deliver immersive experiences in real time in both the virtual and augmented worlds, a great deal of behavioral data is required. Nevertheless, the information is only needed while these experiences are being rendered; there is no intrinsic need to store it over time. This is significant because stored behavioral data can be used to build intrusive profiles that meticulously detail the everyday activities of specific users. Machine-learning systems could quickly scan this data to predict how specific users would behave, respond, and communicate in a variety of day-to-day situations. In an unfettered metaverse, it may become standard practice for platforms to precisely forecast what users will do before they decide. Furthermore, as platforms will be able to change their surroundings in order to persuade users, profiles could be employed to accurately modify users’ behavior in advance. For these reasons, legislators and regulators ought to consider preventing metaverse platforms from collecting and retaining user behavior information over time, stopping platforms from creating comprehensive profiles of their users. Additionally, metaverse platforms should not be allowed to link emotional data to behavioral data, because doing so would enable them to deliver promotionally altered experiences that not only control what users do in immersive worlds but also predictably influence how they feel while doing it. Our society will be significantly impacted by the metaverse. Although there will be numerous beneficial impacts, we must safeguard users from the potential risks. Promotionally altered experiences pose the most nefarious threat, since such strategies could give metaverse platforms the ability to manipulate their users.
Every person should, at the very least, be free from being emotionally or behaviorally profiled while going about their daily life. Every person should have the freedom to trust that their experiences are genuine, without fear of others sneakily introducing targeted advertising into their environment. These rights must be protected now, or else no one will be able to trust the metaverse. All the risks that come with the internet and A.I. become exponential when it comes to the metaverse. If social media can keep us glued to the screen, the metaverse will remove the boundaries between real and unreal, taking advantage of all our perceptions. And if you think the AIs will know us better and better over the years, the risk of becoming puppets in someone else’s show is not that unreal. [...]