
About the possibility of 'Robot Psychology'




If a robot system with an advanced A.I. operating system had the 'ability' to perform 'undirected' or 'semi-undirected' introspection and 'form' all sorts of 'thought structures' about any subject matter (ones that didn't directly relate to important 'operating' functions), could these not-necessarily-important 'thought structures', and the 'managing' of them, cause something analogous to psychological problems in the robot?


Depending on the definition of "psychology" and "psychological problems," sure. The APA defines psychology as "the scientific study of the behavior of individuals and their mental processes." If we take this to apply only to biological systems (as I am sure many researchers do), then no, because the robot is not a biological system; we cannot study it under the heading of psychology, and we would probably study it under something more akin to "engineering." However, if one is happy applying the same term to robots, as I assume you are, then yes.

Assuming the latter: if you think that a being's 'ability' to perform 'undirected' or 'semi-undirected' introspection and 'form' all sorts of 'thought structures' about any subject matter is a cause of psychological problems, and we further assume that a robot could do this, then yes, the robot will have psychological problems by definition. (Is there direct evidence suggesting these are causes, though?)


Dr. Jeff Hancock is founding director of the Stanford Social Media Lab and a professor in the Department of Communication at Stanford University. Hancock works on understanding psychological and interpersonal processes in social media. His research team specializes in using computational linguistics and experiments to understand how the words we use can reveal psychological and social dynamics, such as deception, trust, intimacy and social support. Hancock is well known for his research on how people use deception with technology from sending texts and emails to detecting fake online reviews.


A concept in psychology is helping AI to better navigate our world

The concept: When we look at a chair, regardless of its shape and color, we know that we can sit on it. When a fish is in water, regardless of its location, it knows that it can swim. This is known as the theory of affordance, a term coined by psychologist James J. Gibson. It states that when intelligent beings look at the world they perceive not simply objects and their relationships but also their possibilities. In other words, the chair “affords” the possibility of sitting. The water “affords” the possibility of swimming. The theory could explain in part why animal intelligence is so generalizable—we often immediately know how to engage with new objects because we recognize their affordances.

The idea: Researchers at DeepMind are now using this concept to develop a new approach to reinforcement learning. In typical reinforcement learning, an agent learns through trial and error, beginning with the assumption that any action is possible. A robot learning to move from point A to point B, for example, will assume that it can move through walls or furniture until repeated failures tell it otherwise. The idea is if the robot were instead first taught its environment’s affordances, it would immediately eliminate a significant fraction of the failed trials it would have to perform. This would make its learning process more efficient and help it generalize across different environments.

The experiments: The researchers set up a simple virtual scenario. They placed a virtual agent in a 2D environment with a wall down the middle and had the agent explore its range of motion until it had learned what the environment would allow it to do—its affordances. The researchers then gave the agent a set of simple objectives to achieve through reinforcement learning, such as moving a certain amount to the right or to the left. They found that, compared with an agent that hadn’t learned the affordances, it avoided any moves that would cause it to get blocked by the wall partway through its motion, setting it up to achieve its goal more efficiently.
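
To make the setup concrete, here is a minimal Python sketch of the general idea rather than DeepMind's actual implementation: the agent first explores a toy gridworld with a central wall to record which moves each cell affords, then runs tabular Q-learning restricted to those afforded actions. The grid layout, function names, and hyperparameters are all illustrative assumptions.

```python
import random
from collections import defaultdict

# Illustrative sketch only, not DeepMind's code.
# A 5x5 grid with a wall down the middle column, leaving one gap at y = 2.
W, H = 5, 5
WALL = {(2, y) for y in range(H) if y != 2}
ACTIONS = {'up': (0, 1), 'down': (0, -1), 'left': (-1, 0), 'right': (1, 0)}

def step(state, action):
    """Apply an action; blocked moves (wall or boundary) leave the state unchanged."""
    dx, dy = ACTIONS[action]
    nxt = (state[0] + dx, state[1] + dy)
    if nxt in WALL or not (0 <= nxt[0] < W and 0 <= nxt[1] < H):
        return state, True   # the move was blocked
    return nxt, False

def learn_affordances(trials=5000):
    """Phase 1: random exploration records which actions each cell affords."""
    afforded = defaultdict(set)
    for _ in range(trials):
        s = (random.randrange(W), random.randrange(H))
        if s in WALL:
            continue
        a = random.choice(list(ACTIONS))
        _, blocked = step(s, a)
        if not blocked:
            afforded[s].add(a)
    return afforded

def q_learning(goal, afforded, episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    """Phase 2: tabular Q-learning that only ever considers afforded actions."""
    q = defaultdict(float)
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(50):
            acts = list(afforded[s]) or list(ACTIONS)  # fall back if cell unseen
            if random.random() < eps:
                a = random.choice(acts)
            else:
                a = max(acts, key=lambda act: q[(s, act)])
            s2, _ = step(s, a)
            reward = 1.0 if s2 == goal else -0.01
            nxt_acts = list(afforded[s2]) or list(ACTIONS)
            best_next = max(q[(s2, b)] for b in nxt_acts)
            q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
            s = s2
            if s == goal:
                break
    return q

affordances = learn_affordances()            # learn what is possible
policy_q = q_learning((4, 2), affordances)   # then learn what is rewarding
```

Restricting the action set in the second phase is what lets the agent skip the doomed wall-crossing moves described above.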

Why it matters: The work is still in its early stages, so the researchers used only a simple environment and primitive objectives. But their hope is that their initial experiments will help lay a theoretical foundation for scaling the idea up to much more complex actions. In the future, they see this approach allowing a robot to quickly assess whether it can, say, pour liquid into a cup. Having developed a general understanding of which objects afford the possibility of holding liquid and which do not, it won’t have to repeatedly miss the cup and pour liquid all over the table to learn how to achieve its objective.


Robots as therapists and companions

Socially assistive robots also show promise in providing therapy to children with autism. Research has found that engaging with therapy robots increases engagement, attention and novel behaviors (such as spontaneously imitating the robot) among children with autism, as Scassellati and colleagues described in a review (Annual Review of Biomedical Engineering, Vol. 14, No. 1, 2012).

"Human behavior can be overwhelming for children with autism, yet a social robot can provide a sort of manageable social experience," Breazeal says. "You can start to see children exhibit these social skills and capabilities that you may not have seen them perform before with another human being, and then the clinician can start to build on that."

Scassellati is heading a large project funded by the National Science Foundation to further explore robotic therapy for children with autism. The robots spend a month in the child's home. They analyze and adapt to each child's behavior, tailoring their interactions to suit the child's abilities, preferences and behavioral goals.

So far, eight families have completed the protocol and half a dozen more have welcomed the social machines into their homes. Though the data are still being collected, Scassellati is hopeful. "Anecdotally, the families love them. These children are doing hours of therapy a day, and the robot makes therapy fun," he says. "One mother told us that she's learned new ways of doing things thanks to the robot, and she plans to keep doing them that way. That was fantastic for us."

At the other end of the lifespan, older adults are using robots for comfort and companionship. The robotic harp seal Paro wiggles his flippers, displays a variety of emotions and responds to a user's touch and voice. A pilot study by Wendy Moyle, PhD, RN, and colleagues found that older adults with dementia who regularly spent time with Paro had higher levels of pleasure and greater quality of life than peers who participated in a reading intervention (Journal of Gerontological Nursing, Vol. 39, No. 5, 2013).

Seals may be just the start. Breazeal predicts that more humanlike robots might one day provide companionship to older adults. "Elder care is a huge area of opportunity, and chronic loneliness is an epidemic in our society. I think technology will be one part of the solution."

Last year, a company Breazeal heads, called Jibo, released a tabletop robot of the same name that can tell jokes and stories, play games and otherwise interact with the people around it. The robot uses facial and vocal recognition to get to know each member of the family, adapting its interactions to each individual. "Jibo is designed to interact and express its own internal states, such as its likes and dislikes," she says. "It's the very beginning of bringing these technologies out into the world, but it's opening the avenue to socially intelligent robots."

APA is hosting Technology, Mind and Society, an interdisciplinary conference exploring interactions between humans and technology, on April 5–7, in Washington, D.C. For more information or to register, visit https://pages.apa.org/tms.

Additional reading

What Kind of Mind Do I Want in My Robot? Developing a Measure of Desired Mental Capacities in Social Robots
Malle, B.F., & Magar, S.T. Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, 2017

Moral Judgments of Human vs. Robot Agents
Voiklis, J., Kim, B., Cusimano, C., & Malle, B. Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication, 2016


Would You Feel Jealous if Your Partner Got a Sex Robot?

I get asked about the future of sex a lot, and one of the topics that often comes up is how robots are going to change our intimate lives and relationships. Admittedly, there’s not a lot of research out there on this subject yet because sex robots are still in their infancy, so it’s hard to make concrete predictions. However, there has been some work looking at people’s attitudes toward sex robots and the results suggest that many people aren’t enthusiastic about the idea of sex robots—and, in particular, the idea of their partner using one.

In a recent study, researchers surveyed 277 adults (average age of 27, 90% of whom were heterosexual) about their attitudes toward futuristic sex robots. They used an experimental design in which half of the participants were asked about a “sex robot” that “looks and feels just like humans,” whereas the others were asked about “a platonic love robot” with no physical body that is capable of providing “meaningful romantic and friendly relation to a human.”

They were then asked how they felt about the idea of their partner owning such a robot, as well as how they think their partner would feel if the shoe were on the other foot (i.e., would your partner be cool with you owning one of these robots?).

In general, participants thought the idea of both robots sounded realistic. However, while there was quite a bit of variability in people's feelings, attitudes toward both robots were negative overall.

Overall, men reported more positive attitudes toward robots than did women, but this was mainly true for the sex robot. When looking at the platonic love robot, men’s and women’s attitudes were fairly similar.

Regardless of robot type, men expected to feel less jealous than women did if their partner got one. Also, women reported higher feelings of jealousy about a sex robot than a love robot, which is contrary to what the researchers had expected to find (some past jealousy research has found that women report more emotional than sexual jealousy, which was the basis for their prediction).

When thinking about their partner owning a robot, men reported similar levels of dislike for both the sex and love robots; by contrast, women reported more dislike for their partner using a sex robot than a love robot.

Lastly, when thinking about how their partner would respond if they were the one who owned the robot, men expected similar levels of dislike from their partner for both types of robots, whereas women anticipated more partner dislike if they used a sex robot than a love robot.

Of course, these findings are limited because they're based on how people feel about hypothetical sex/love robots. Most people just don't have any personal experience with this technology yet. Also, people may have ideas of how they'd feel about robots that are shaped by popular media depictions, where it's much more common to see female rather than male sex robots and where using these robots often ends in disaster (ever seen Ex Machina or Westworld?).

That said, these results point to important differences in how people feel about robots based on robot type (sex vs. love) and an individual’s gender. They also suggest that robots may eventually become a potential source of conflict in relationships to the extent that partners have different feelings about them. And perhaps this conflict will be more pronounced in mixed-sex (male/female) relationships due to differences in men’s and women’s attitudes, but less so in same-sex relationships (where partners’ attitudes may be more in line with each other).

An interesting line of inquiry for future research would be to ask how people feel about using sex/love robots with a partner, as opposed to only one partner using them. When robots are framed as a substitute or replacement for a real-life partner, that may evoke more negative attitudes and jealousy. But what about when robots become a complement to a couple’s sex life, offering a new way for partners to interact together? For example, are people in relationships more open to the idea of robot threesomes rather than solo robot use?

Also, what if sex/love robots were framed as a way of fixing a sexual or relationship problem? For example, if two partners want drastically different amounts of sex, could a sex robot be a viable solution to this problem and potentially reduce conflict and prevent infidelity? Or what about using sex/love robots in long-distance relationships where partners see each other very infrequently as a way of providing more sexual and intimate connection?

So what do you think? How would you feel about the idea of using a sex vs. love robot yourself? And how would you feel if your partner used one?


To learn more about this research, see: Nordmo, M., Næss, J. Ø., Husøy, M. F., & Arnestad, M. N. (2020). Friends, lovers or nothing: Men and women differ in their perceptions of sex robots and platonic love robots. Frontiers in Psychology, 11, 355.


Robotic Positive Psychology Coach for College Students

A significant number of college students suffer from mental health issues that impact their physical, social, and occupational outcomes. Various scalable technologies have been proposed in order to mitigate the negative impact of mental health disorders. However, the evaluation for these technologies, if done at all, often reports mixed results on improving users’ mental health. We need to better understand the factors that align a user’s attributes and needs with technology-based interventions for positive outcomes. In psychotherapy theory, therapeutic alliance and rapport between a therapist and a client is regarded as the basis for therapeutic success.

In prior works, social robots have shown the potential to build rapport and a working alliance with users in various settings. In this work, we explore the use of a social robot coach to deliver positive psychology interventions to college students living in on-campus dormitories. We recruited 35 college students to participate in our study and deployed a social robot coach in their room. The robot delivered daily positive psychology sessions among other useful skills like delivering the weather forecast, scheduling reminders, etc. We found a statistically significant improvement in participants’ psychological wellbeing, mood, and readiness to change behavior for improved wellbeing after they completed the study. Furthermore, students’ personality traits were found to have a significant association with intervention efficacy. Analysis of the post-study interview revealed students’ appreciation of the robot’s companionship and their concerns for privacy.
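
The exact statistical tests are not specified in this summary, so as a rough, hypothetical illustration of the kind of pre/post comparison such a study might report, here is a paired t-test in Python on simulated wellbeing scores (not the study's data):

```python
import numpy as np
from scipy import stats

# Simulated pre/post wellbeing scores for 35 students -- NOT the study's data.
rng = np.random.default_rng(1)
pre = rng.normal(loc=3.2, scale=0.6, size=35)          # baseline scores
post = pre + rng.normal(loc=0.4, scale=0.5, size=35)   # scores after the study

# A paired t-test checks whether the mean within-person change differs from zero.
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"mean change = {np.mean(post - pre):.2f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```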

In comparison to task-oriented health technological tools, a social robot has unique opportunities to build therapeutic alliance with its users and to leverage that rapport to further enhance the effectiveness of the interventions it provides.


Discussion

The suggestions and problems raised by the participants show that a lack of teaching resources, a shortage of teachers, and insufficient attention to detail in instructional design and teaching methods can degrade the learning experience, the learning process, and even learning outcomes. The teaching team must therefore pay close attention to these issues and address every aspect of instructional design in future teaching and learning activities. Problems, of course, are the best motivation for self-improvement, and the suggestions put forward by students are the most valuable resource for improving teaching. As learners are expected to grow, educators also need to develop continually, so that teachers and students grow together. The expectations expressed by the learners are the strongest affirmation of the teaching team's work: students learned something during the experimental course and hope for more opportunities to learn in the future. This shows that the team's contribution is valuable; it also gives the team a sense of achievement and greater motivation to keep improving the instructional design so that students can acquire more knowledge and skills.

The four aspects can be summarized in three points. First, the participants evaluated the teaching and learning activities positively. In their reflection diaries, learners clearly described the teaching content and experimental tasks, actively participated in the activities, and endorsed the teaching team's instructional design and implementation. Second, the participants gained knowledge and skills. During the course, learners not only enriched their knowledge systems and cognitive structures but also drew inspiration from daily life and cultivated innovative thinking; at the same time, they developed creative awareness, improved their practical abilities, and strengthened their sense of teamwork. Third, the participants offered constructive suggestions about shortcomings in the teaching process. These suggestions can push the teaching team to design instruction more carefully, provide high-quality teaching resources, and continually optimize teaching and learning methods, thereby promoting meaningful learning.


OPINION article

  • Research Unit on Theory of Mind, Department of Psychology, Catholic University of the Sacred Heart, Milan, Italy

The coronavirus disease 2019 (COVID-19) pandemic has prompted much research on the possible use of robots in different areas of intervention. One of them is the deployment of social robots to cope with different needs elicited by, and depending on, the emergency. According to a recent article published in Science (Yang et al., 2020, p. 1), “social robots could be deployed to provide continued social interactions and adherence to treatment regimens without fear of spreading disease.” In this context, social isolation and quarantine—often significantly prolonged due to the duration of the infection—have plausibly exerted a negative impact on well-being and perhaps mental health, whose jeopardy was even more likely if a previous psychological vulnerability was present. If historically robots have been employed in dangerous and risky duties, presently some of the most promising domains of robot development also include rehabilitation, caring, and educational and clinical intervention. We are witnessing a shift from the concept of “robots as slaves” to “robots as companions, nurses, teachers…” that, in a word, behave, interact, and work “like us” (cf. Marchetti et al., 2018). Yang et al. argue that social robots used to support “adherence to treatment regimens” without spreading fear need to be implemented following sophisticated human models, including mental states like emotions and beliefs, as well as the context and environment of the interaction (p. 2). In our opinion, the “environments” are the affordances strictly linked to survival in an evolutionary sense. The “context” is represented by everyday-life socio-material and socio-cognitive cues. Furthermore, we believe that the implementation of social robots based on every possible human model cannot merely be the product of “a fusion of engineering and infectious disease professionals” (Yang et al., 2020, p. 2). The model would require an interdisciplinary perspective that also includes the contribution of psychologists. The recent pandemic has in fact laid the foundations for rereading our daily relationships from the point of view not only of human relations but also of other agents, such as robots. In the present Opinion, we therefore suggest that the use of robots is not only a purely technical issue but is also supported by important changes in the way we view relationships, particularly with those who are close to us. With this aim in mind, we focused on identifying some of the psychological components most subject to change due to the current global situation. Take, for example, the emotion of fear mentioned above. Fear will probably take (if it has not already) a different form because of the virus. Fear is a primary (Ekman and Friesen, 1971) and adaptive emotion, developed through evolution to enable coping with danger and ensure survival. Predators, contaminants, and invaders are the potentially dangerous enemies, all risky variables against which close relationships usually act as protective factors. In case of fear, the options for the individual are represented by the so-called “fight or flight” behaviors. On the relational level, it is the search for a secure base (Bowlby, 1988), where a place can be found for reassurance and affective supply. This tendency persists into adulthood due to the transgenerational transmission of attachment patterns.

Nonetheless, the COVID-19 pandemic confronted us with a scenario where “fear has no face.” Now, it also involves close relationship partners, i.e., people who potentially are sources or recipients of care. This profoundly contrasts with a series of fundamental developmental achievements that make physical proximity the embodied prototype of psychological proximity. The individual undertakes a path in which the “known social other”/“unknown social other” dichotomy acts as an organizer of beliefs and attitudes, thus contributing to the construction of the Self as a distinct and separate entity from the Other. From a sensorineural point of view, the human baby is equipped to recognize and trustfully orient herself/himself toward primary figures of care and protection; it is precisely on this basis that trust is built in others and in ourselves (Di Dio et al., 2019, 2020a,b; Manzi et al., 2020a,b). The so-called “anguish of the stranger” (Spitz, 1945; Schaffer, 1966) emerges around 8 months of age. It marks the distinction between the caregivers and all others: before becoming a neutral agent that the child will observe and come to know, the “other” per se is perceived as scary (in other words, worthy of fear). This step appears to be in line with the older child's behavior observed within the Strange Situation (a paradigm aimed at evaluating attachment; Ainsworth et al., 1978): the response of distress and fear toward the stranger, who is generally more accepted if the mother is present, and reactions toward whom are predicted by the security of the child's attachment to the mother. Later in life, the developing child can establish attachment bonds with other people in her/his life contexts: friends, schoolmates, relatives of the extended family, teachers, and educators in various contexts, from school to sports activities (Pianta, 1999). While the theoretical perspective of multiple attachments postulates that the widening of the “known social other” sphere is characterized by a differentiation of the functional roles played by multiple relationships, it maintains the fundamental developmental ability to identify the other as a “secure-safe social partner,” distinguishing him/her from the “risky-unsafe social partner.” The possibility of creating multiple attachments prevents a series of developmental risks and acts as an enhancer of positive primary attachment relationships and as a vicarious protective factor in conditions of relational-affective fragility. Besides, secure relationships with multiple figures—with the teacher, just to give an example—are connected not only with personal well-being within the affective sphere but also with cognitive performance at school, as well as with socio-cognitive indexes like school climate, peer acceptance, and so on. In order to exert an enhancing-protective role, all these “others” (educators, teachers, relatives) have to be perceived as “beside me.” The physical sense of “beside”—in its literal meaning—anticipates in development, and continues to support across the life span, the metaphorical sense of the human experience of psychological closeness and proximity. And it is precisely the impossibility of fully getting the chances offered by the different meanings of “besideness” (physical proximity and security/safeness) that is responsible for the erosion of the feeling of being protected from fear within the contexts of affective bonds.
Although technology allows us to be connected even when physically separated, the loneliness and isolation widely reported during COVID-19 may depend both on technology's inability to embody affective relationships and perhaps also on a more or less implicit awareness that “the known social other” (my caregiver, daughter, son, teacher, girlfriend/boyfriend, educator) could be dangerous for me. Consequently, the pervasive mood of close relationships is no longer that of security but rather a widespread sense of fuzzy fear (furthermore, if people reflect on the possibility of being an active agent of contagion for their beloved ones, the complex emotion of potential fuzzy guilt must be added to the basic emotion of fear). So, if in-group/out-group dynamics—up to the attitudes toward the “stranger” in a geographical and political sense (Antonietti and Marchetti, 2020)—are the result of this primary articulation according to which “known-familiar” equals reliable and “unknown-unfamiliar” equals potentially dangerous (danger from which—phylogenetically and ontogenetically—the “known-familiar” is in charge of protecting us), the effect of the fuzziness of emotions, and especially of fear, on mental health in a stressful situation like the COVID-19 pandemic can easily be imagined. In fact, the COVID-19 pandemic implies the possibility of indiscriminate contagion by anyone, including those closest to us in a psychic sense. Because of this, it undermines the dynamics depicted above by eliciting an unprecedented form of fear, in which the boundaries between safety and risk fall. If infected, it is necessary to adhere to the rule of indiscriminate social distancing from everyone. The same applies if a relative is infected. The work of mercy to “visit the sick” cannot be accomplished, just as it is impossible to extend a final farewell to those who have left us forever. In a word, COVID-19 has completely changed the physiognomy of security/trust/danger/risk and fear, suddenly destroying a bond that evolution and ontogenetic development took a long time to build. The feelings of neglecting, if not abandoning, one's beloved ones, or of being neglected, if not abandoned, by them for the protective purposes of social distancing, are not easy to manage from a psychological point of view; the experience of isolation, loneliness, and the worry of being forgotten are difficult to explain and to make comprehensible for children as well as for the elderly. This is to say that the erosion of the foundations of the distinction between “known-familiar-safe” and “unknown-stranger-unsafe” could vary according to the developmental phase of the individual as well as his or her status as expert or novice. In terms of developmental phases, the cognitive, social, and affective resources typical of specific ages allow children to assimilate and elaborate differently information about the virus, its effects, and the dangers of proximity to beloved people. On the other hand, viewed from the perspective of expert/novice status, which is partially connected with the developmental phases, having reliable information or real scientific knowledge about the spread of the virus could help to better manage the effect of this new form of fuzzy fear. Going back to the role played by robotics within the psychological framework briefly outlined here, the use of robots may change depending on a series of factors that only the contribution of psychologists may help to highlight.
First of all, the “like me experience,” which represents the basis of acceptance or refusal of social robots, changes with age. Like people's sense of people (to paraphrase Legerstee, 2005), people's sense of social robots depends on development, as well as on the aims and contexts of the robots' use (Marchetti et al., 2018). For these reasons, it is fundamental that the design of social robots meant to be deployed in situations of “fuzzy fear” like the one we are experiencing not only includes the purposes of assistance, companionship, or tutoring associated with medical regimens but also takes on the role of “fear-free” mediator of affective functions. In this way, robots do not become substitutes for the close relationship partners from whom social distancing separates us, but act as relational bridges between those who are separated for health and safety reasons. As an effect of this rethinking of the functions of social robots in emergency situations, some current negative attitudes toward social robots—from resistance and ambivalence up to the uncanny valley phenomenon (Mori, 1970; MacDorman and Ishiguro, 2006)—could significantly change. To pursue the goal of designing social robots useful for the psychological needs described here (i.e., coping with fuzzy fear and taking advantage of robots as affective mediators), a deep, psychologically driven rethinking will be needed around three basic axes of reflection. The first two axes are more general. The first regards the psychological understanding of the people involved in human–robot interactions during a sanitary emergency, in terms of level of development, socio-demographic characteristics, and previous experience with social robots (see the expert/novice distinction above). Expectations of and attitudes toward social robots may in fact change according to both development and expertise. The second axis regards the construction of social robots that are able not only to take into account the needs of their human partners but also to relate to the human agent in an understandable way. This represents an extremely important feature that every human would expect from the interactive experience. The robotics literature calls it “transparency”/“explainability” (Holzinger et al., 2019), which would correspond to the experience of Theory of Mind (Perner, 1991; Wellman et al., 2001) in the domain of human–human interaction. The third axis of reflection relates to a goal that we hope to achieve in a not too distant future. Specifically, it concerns identifying the best way to devise social robots that can sensitively manage and respond to the behavior of a human partner undergoing a possible acute, temporary breakdown in the ability to scaffold the sense of emotional security—like some of us during this COVID-19 emergency—that is the very basis of Self construction.

The theoretical reflections discussed in this Opinion therefore reread the question of fear in the light of a danger that poses new questions and that, as suggested, leads to rethinking particular psychological and social dynamics. In the new relational dynamics hypothesized in the present work, from which the robot is spared, the COVID-19 pandemic has added novelty to the physiognomy of fear, which (unlike anxiety) is an emotion linked to objects and situational antecedents, and which may therefore be affected by the nature of its objects at the level of subjective experience, behavioral reactions, and coping strategies. These theoretical suggestions may enrich knowledge from an interdisciplinary perspective spanning robotics and psychology, providing important starting points for future research by emphasizing which psychological components should be investigated in people interacting with robots. An example is the perception of in-group/out-group, as well as the components of fear, which, in our opinion, are mitigated toward robots in the specific COVID-19 situation, which forces us to adapt to the inclusion of new social agents devoted to care assistance.


Could a Rising Robot Workforce Make Humans Less Prejudiced?

Jackson, J., Castelo, N. & Gray, K. (2019).
American Psychologist. (2019)

Automation is becoming ever more prevalent, with robot workers replacing many human employees. Many perspectives have examined the economic impact of a robot workforce, but here we consider its social impact: how will the rise of robot workers affect intergroup relations? Whereas some past research suggests that more robots will lead to more intergroup prejudice, we suggest that robots could also reduce prejudice by highlighting commonalities between all humans. As robot workers become more salient, intergroup differences—including racial and religious differences—may seem less important, fostering a perception of a common human identity (i.e., “panhumanism”). Six studies (ΣN = 3,312) support this hypothesis. Anxiety about the rising robot workforce predicts less anxiety about human out-groups (Study 1), and priming the salience of a robot workforce reduces prejudice towards out-groups (Study 2), makes people more accepting of out-group members as leaders and family members (Study 3), and increases wage equality across in-group and out-group members in an economic simulation (Study 4). This effect is mediated by panhumanism (Studies 5-6), suggesting that the perception of a common human in-group explains why robot salience reduces prejudice. We discuss why automation may sometimes exacerbate intergroup tensions and at other times reduce them.
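
For readers unfamiliar with the mediation analyses referenced in Studies 5-6, the sketch below illustrates the generic product-of-coefficients approach with a percentile-bootstrap confidence interval. The data are simulated and the variable names are illustrative; this is not the authors' analysis code.

```python
import numpy as np

# Simulated data -- illustrative only, not the authors' dataset.
# x: robot-salience prime (0/1), m: panhumanism, y: prejudice.
rng = np.random.default_rng(0)
n = 500
x = rng.integers(0, 2, n).astype(float)
m = 0.5 * x + rng.normal(size=n)        # prime raises panhumanism (path a)
y = -0.4 * m + rng.normal(size=n)       # panhumanism lowers prejudice (path b)

def slope(pred, outcome, control=None):
    """Least-squares coefficient of `pred`, optionally controlling for `control`."""
    cols = [np.ones(len(pred)), pred] + ([control] if control is not None else [])
    beta, *_ = np.linalg.lstsq(np.column_stack(cols), outcome, rcond=None)
    return beta[1]

a = slope(x, m)                 # effect of prime on the mediator
b = slope(m, y, control=x)      # effect of mediator on outcome, holding prime fixed
print("indirect effect a*b:", round(a * b, 3))

# Percentile-bootstrap confidence interval for the indirect effect.
boot = []
for _ in range(2000):
    i = rng.integers(0, n, n)
    boot.append(slope(x[i], m[i]) * slope(m[i], y[i], control=x[i]))
print("95% CI:", np.percentile(boot, [2.5, 97.5]).round(3))
```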

From the General Discussion

An open question remains about when automation helps versus harms intergroup relations. Our evidence is optimistic, showing that robot workers can increase solidarity between human groups. Yet other studies are pessimistic, showing that reminders of rising automation can increase people’s perceived material insecurity, leading them to feel more threatened by immigrants and foreign workers (Im et al., in press; Frey, Berger, & Chen, 2017), and data that we gathered across 37 nations—summarized in our supplemental materials—suggest that the countries that have automated the fastest over the last 42 years have also increased more in explicit prejudice towards out-groups, an effect that is partially explained by rising unemployment rates.


Recent studies have suggested artificial intelligence can develop sexist and racist tendencies

“There have to be some things that are respected, like the autonomy of people and their privacy,” says Delvaux.

This perhaps also highlights another issue troubling many dealing with artificial intelligence – the problem of bias. Machine learning systems are only as good as the data they are given to learn on, and recent studies have suggested artificial intelligence can develop sexist and racist tendencies.
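
The point that learning systems inherit the skew of their training data is easy to demonstrate. The toy sketch below (synthetic data, not any production system) trains a classifier on data dominated by one group and shows it performing near chance on an underrepresented group whose pattern differs:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy illustration of data-driven bias: when one group dominates the training
# data, the model learns that group's pattern and fails on the other group.
rng = np.random.default_rng(42)

def make_group(n, w):
    """Points labeled by a group-specific linear rule, X @ w > 0."""
    X = rng.normal(size=(n, 2))
    y = (X @ w > 0).astype(int)
    return X, y

w_major = np.array([1.0, 1.0])    # majority group's labeling rule
w_minor = np.array([1.0, -1.0])   # minority group's (different) rule

X_maj, y_maj = make_group(2000, w_major)   # plentiful majority data
X_min, y_min = make_group(50, w_minor)     # scarce minority data

model = LogisticRegression().fit(np.vstack([X_maj, X_min]),
                                 np.concatenate([y_maj, y_min]))

# Evaluate on fresh, equally sized samples from each group.
for name, w in [("majority", w_major), ("minority", w_minor)]:
    Xt, yt = make_group(1000, w)
    print(f"{name} group accuracy: {model.score(Xt, yt):.2f}")
```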

Delvaux also points to the people who are writing the algorithms in the first place. The majority of people working in the technology industry are white males, with men making up between 70% and 90% of the employees at some of the biggest and most influential companies.

Silicon Valley has been rocked over the past couple of years with scandals about sex discrimination. It has raised fears that robots and machines could display similar discriminatory behaviour.

“It is a very thin slice of the population currently designing our technologies,” warns Judy Wajcman, a professor of sociology at the London School of Economics. “Technology needs to reflect society, so there needs to be a shift in the design and innovation process.”

Meanwhile, Bill Gates recently suggested yet another ethical red flag: that robots themselves may have to be taxed to make up for lost levies on income from employees. Others have suggested as robots take on more tasks, there could be a growing case for universal basic income, where everyone receives state benefits.

Much of this, of course, assumes that robots are actually capable of doing the jobs we set them. Despite their apparent intelligence, most robots are still pretty dumb contraptions when compared to our own capabilities.

Customers dine outside Eatsa, San Francisco's first fully automated restaurant, in 2015. Food is picked up in 'cubbies'; no server, wait staff or cashier required. (Credit: Getty)

Machines have a ways to go

Like the Ikea example, AI leaves a lot of room for improvement.

Perhaps one of the greatest issues facing the machine learning and artificial intelligence community currently is understanding how their algorithms work. “Things like artificial intelligence and machine learning are still largely black boxes,” argues Manyika. “We can’t open them up to find out how they got the answer they produce.”

This presents a number of issues. Machine learning systems and modern AI are usually trained using large sets of images or data that are fed in to allow them to recognise patterns and trends. They can then use this to spot similar patterns when they are given new data.

This might be fine if we want to find CT scans that show signs of disease, for example, but if we use a similar system to identify a suspect from a fragment of CCTV footage, knowing how it did this may be crucial when presenting the evidence to a jury.
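
One simple family of techniques for probing such black boxes measures how much a model's accuracy drops when each input feature is shuffled. Below is a minimal sketch using scikit-learn's permutation importance on synthetic data; it illustrates the idea, and is not a complete explainability method:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic task where only features 0 and 1 actually determine the label.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Shuffle each feature in turn and record how much accuracy drops.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```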

Even in the field of autonomous vehicles, this ability to generalise remains a considerable challenge.

Takeo Kanade, a professor of robotics at Carnegie Mellon University, is one of the pioneers of self-driving vehicles and an expert in computer vision. He says giving robots a “genuine understanding” of the world around them is still a technical challenge that needs to be overcome.

“It is not just about identifying where objects are,” he explains, following a lecture at the inaugural Kyoto Prize at Oxford event, where he outlined the problems facing researchers. “The technology has to be able to understand what the world is doing around them. For example, is that person actually going to cross the road in front of them, or not?”

Hawes himself encountered a similar problem with one of his own projects that put an autonomous “trainee office manager” into several offices in the UK and Austria.

The team programmed the robot, called Betty, to trundle around the offices monitoring for clutter building up, checking whether fire doors were closed, measuring noise and counting workers at their desks outside normal hours.

“Things would appear in the environment like chairs moving, people shifting their desks or pot plants,” he says. “Dealing with that without reprogramming the whole robot is challenging.”

But even though the robot wasn’t perfect, the humans still found a way of working alongside it.

Surprisingly, those working alongside Betty actually responded to their mechanical worker in a positive way, even coming to its aid if the robot ever got stuck in a corner. “People would say hello to it in the morning and said it made the office more interesting to work in,” says Hawes.

If we can hand the tedious, repetitive bits of our jobs to machines, it could free us up to do some of the things we actually enjoy. “Work has the potential to become more interesting as a result,” says Frey.

It is a tantalising thought, that just perhaps, the rise of the machines could make our jobs a lot more human.





