A virtual assistant (VA) is a software agent that can perform a range of tasks or services for a user based on user input such as commands or questions, including verbal ones. Such technologies often incorporate chatbot capabilities to simulate human conversation, such as via online chat, to facilitate interaction with their users. The interaction may be via text, graphical interface, or voice, as some virtual assistants are able to interpret human speech and respond via synthesized voices.
In many cases users can ask their virtual assistants questions, control home automation devices and media playback, and manage other basic tasks such as email, to-do lists, and calendars - all with verbal commands.[1] In recent years, prominent virtual assistants for direct consumer use have included Apple's Siri, Amazon Alexa, Google Assistant, and Samsung's Bixby.[2] Also, companies in various industries often incorporate some kind of virtual assistant technology into their customer service or support.[3]
The emergence of artificial intelligence-based chatbots, such as ChatGPT, has brought increased capability and interest to the field of virtual assistant products and services.[4][5]
History
Experimental decades: 1910s–1980s
Radio Rex was the first voice-activated toy, patented in 1916[6] and released in 1922.[7] It was a wooden toy in the shape of a dog that would come out of its house when its name was called.
In 1952, Bell Labs presented "Audrey", the Automatic Digit Recognition machine. It occupied a six-foot-high relay rack, consumed substantial power, had streams of cables and exhibited the myriad maintenance problems associated with complex vacuum-tube circuitry. It could recognize the fundamental units of speech, phonemes, but was limited to accurate recognition of digits spoken by designated talkers. It could therefore be used for voice dialing, but in most cases push-button dialing was cheaper and faster than speaking the consecutive digits.[8]
Another early tool capable of digital speech recognition was the IBM Shoebox, a voice-activated calculator presented to the general public during the 1962 Seattle World's Fair after its initial market launch in 1961. This early computer, developed almost 20 years before the introduction of the first IBM Personal Computer in 1981, was able to recognize 16 spoken words and the digits 0 to 9.
ELIZA, the first natural language processing computer program and chatbot, was developed by MIT professor Joseph Weizenbaum in the 1960s. It was created to "demonstrate that the communication between man and machine was superficial".[9] ELIZA used pattern matching and the substitution of keywords into scripted responses to simulate conversation, which gave an illusion of understanding on the part of the program.
Weizenbaum's own secretary reportedly asked Weizenbaum to leave the room so that she and ELIZA could have a real conversation. Weizenbaum was surprised by this, later writing: "I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."[10]
This gave its name to the ELIZA effect, the tendency to unconsciously assume that computer behaviors are analogous to human behaviors; that is, anthropomorphisation, a phenomenon present in human interactions with virtual assistants.
The next milestone in the development of voice recognition technology was achieved in the 1970s at Carnegie Mellon University in Pittsburgh, Pennsylvania, with substantial support from the United States Department of Defense and its DARPA agency, which funded five years of a Speech Understanding Research program aiming to reach a minimum vocabulary of 1,000 words. Companies and academic institutions including IBM, Carnegie Mellon University (CMU) and Stanford Research Institute took part in the program.
The result was "Harpy", it mastered about 1000 words, the vocabulary of a three-year-old and it could understand sentences. It could process speech that followed pre-programmed vocabulary, pronunciation, and grammar structures to determine which sequences of words made sense together, and thus reducing speech recognition errors.
In 1986, IBM introduced the Tangora, a voice-recognizing typewriter and an upgrade of the Shoebox. Named after the world's fastest typist at the time, it had a vocabulary of 20,000 words and used prediction to decide the most likely result based on what had been said previously. IBM's approach was based on a hidden Markov model, which adds statistics to digital signal processing techniques and makes it possible to predict the most likely phonemes to follow a given phoneme. Still, each speaker had to individually train the typewriter to recognize his or her voice and to pause between each word.
Birth of smart virtual assistants: 1990s–2010s
In the 1990s, digital speech recognition technology became a feature of the personal computer, with IBM, Philips and Lernout & Hauspie competing for customers. The 1994 market launch of the first smartphone, the IBM Simon, laid the foundation for smart virtual assistants as we know them today.
In 1997, Dragon's NaturallySpeaking software could recognize and transcribe natural human speech, without pauses between words, into a document at a rate of 100 words per minute. A version of NaturallySpeaking is still available for download and is still used today, for instance, by many doctors in the US and the UK to document their medical records.
In 2001, Colloquis publicly launched SmarterChild on platforms such as AIM and MSN Messenger. While entirely text-based, SmarterChild was able to play games, check the weather, look up facts, and converse with users to an extent.[11]
The first modern digital virtual assistant installed on a smartphone was Siri, which was introduced as a feature of the iPhone 4S on 4 October 2011.[12] Apple Inc. developed Siri following the 2010 acquisition of Siri Inc., a spin-off of SRI International, which is a research institute financed by DARPA and the United States Department of Defense.[13] Its aim was to aid in tasks such as sending a text message, making phone calls, checking the weather or setting up an alarm. Over time, it has developed to provide restaurant recommendations, search the internet, and provide driving directions.
In November 2014, Amazon announced Alexa alongside the Echo.[14]
In April 2017, Amazon released Amazon Lex, a service for building conversational interfaces for any type of virtual assistant or interface.
Artificial intelligence and language models: 2020s–present
In the 2020s, artificial intelligence (AI) systems like ChatGPT have gained popularity for their ability to generate human-like responses to text-based conversations. In February 2020, Microsoft introduced its Turing Natural Language Generation (T-NLG), which was then the "largest language model ever published at 17 billion parameters."[15] On November 30, 2022, ChatGPT was launched as a prototype and quickly garnered attention for its detailed responses and articulate answers across many domains of knowledge. The advent of ChatGPT and its introduction to the wider public increased interest and competition in the space. In February 2023, Google began introducing an experimental service called "Bard", based on its LaMDA program, which generates text responses to questions using information gathered from the web.
While ChatGPT and other generalized chatbots based on the latest generative AI are capable of performing various tasks associated with virtual assistants, there are also more specialized forms of such technology that are designed to target more specific situations or needs.[16][4]
Method of interaction
Virtual assistants work via:
- Text, including: online chat (especially in an instant messaging application or other application), SMS text, e-mail or other text-based communication channel, for example Conversica's intelligent virtual assistants for business.[17]
- Voice: for example with Amazon Alexa[18] on Amazon Echo devices, Siri on an iPhone, Google Assistant on Google-enabled Android devices, or Bixby on Samsung devices.
- Images: some assistants, such as Google Assistant (which includes Google Lens) and Bixby on the Samsung Galaxy series, have the added capability of performing image processing to recognize objects in images.
Many virtual assistants are accessible via multiple methods, offering versatility in how users can interact with them, whether through chat, voice commands, or other integrated technologies.
Virtual assistants use natural language processing (NLP) to match user text or voice input to executable commands. Some continually learn using artificial intelligence techniques including machine learning and ambient intelligence.
To activate a virtual assistant using voice, a wake word might be used. This is a word or group of words such as "Hey Siri", "OK Google" or "Hey Google", "Alexa", and "Hey Microsoft".[19] As virtual assistants become more popular, there are increasing legal risks involved.[20]: 815
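As a simplified illustration of this pipeline, the sketch below (in Python) shows a wake-word check followed by rule-based matching of a transcribed utterance to a command handler. It is a minimal, hypothetical example for illustration only: the wake phrases, intents, and handlers are invented, and production assistants rely on statistical or neural language models rather than regular expressions.

```python
# Minimal, illustrative sketch (not any vendor's implementation) of how a
# voice assistant can map a transcribed utterance to an executable command:
# a wake-word check followed by rule-based intent matching.
# All names ("hey demo", the intents, the handlers) are hypothetical.

import re
from typing import Callable, Optional

WAKE_WORDS = ("hey demo", "ok demo")  # hypothetical wake phrases

# Each intent is a regex over the transcribed text plus a handler function.
INTENTS: list[tuple[re.Pattern, Callable[[re.Match], str]]] = [
    (re.compile(r"\bweather in (?P<city>[a-z ]+)", re.I),
     lambda m: f"Fetching the weather for {m.group('city').strip()}."),
    (re.compile(r"\badd (?P<item>.+) to my shopping list", re.I),
     lambda m: f"Added {m.group('item')} to your shopping list."),
]

def handle_utterance(transcript: str) -> Optional[str]:
    """Return a response if the transcript starts with a wake word and
    matches a known intent; otherwise return None (assistant stays idle)."""
    text = transcript.lower().strip()
    wake = next((w for w in WAKE_WORDS if text.startswith(w)), None)
    if wake is None:
        return None                      # no wake word: ignore the audio
    command = text[len(wake):].strip(" ,")
    for pattern, handler in INTENTS:
        match = pattern.search(command)
        if match:
            return handler(match)        # dispatch to the matched command
    return "Sorry, I didn't understand that."

if __name__ == "__main__":
    print(handle_utterance("Hey demo, what's the weather in Paris?"))
    print(handle_utterance("OK demo, add milk to my shopping list"))
    print(handle_utterance("unrelated background speech"))
```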
Devices and objects where virtual assistants are found
Virtual assistants may be integrated into many types of platforms or, like Amazon Alexa, across several of them:
- Into devices like smart speakers such as Amazon Echo, Google Home and Apple HomePod
- In instant messaging applications on both smartphones and the Web, e.g. Facebook's M on the Facebook and Facebook Messenger apps and on the Web
- Built into a mobile operating system (OS), as are Apple's Siri on iOS devices and BlackBerry Assistant on BlackBerry 10 devices, or into a desktop OS such as Cortana on Microsoft Windows OS
- Built into a smartphone independent of the OS, as is Bixby on the Samsung Galaxy S8 and Note 8.[21]
- Within instant messaging platforms, assistants from specific organizations, such as Aeromexico's Aerobot on Facebook Messenger or WeChat Secretary.
- Within mobile apps from specific companies and other organizations, such as Dom from Domino's Pizza[22]
- In appliances,[23] cars,[24] and wearable technology.[25]
- Previous generations of virtual assistants often worked on websites, such as Alaska Airlines' Ask Jenn,[26] or on interactive voice response (IVR) systems such as American Airlines' IVR by Nuance.[27]
Services
Virtual assistants can provide a wide variety of services. These include:[28]
- Provide information such as weather and facts from e.g. Wikipedia or IMDb, set alarms, and make to-do lists and shopping lists
- Play music from streaming services such as Spotify and Pandora; play radio stations; read audiobooks
- Play videos, TV shows or movies on televisions, streaming from e.g. Netflix
- Conversational commerce (see below)
- Assist public interactions with government (see Artificial intelligence in government)
- Complement and/or replace human customer service specialists[29] in domains like healthcare, sales, and banking. One report estimated that an automated online assistant produced a 30% decrease in the workload of a human-staffed call centre.[30]
- Enhance the driving experience by enabling interaction with virtual assistants like Siri and Alexa while in the car.
Conversational commerce
Conversational commerce is e-commerce conducted via various means of messaging, including voice assistants,[31] live chat on e-commerce websites, live chat on messaging applications such as WeChat, Facebook Messenger and WhatsApp,[32] and chatbots on messaging applications or websites.
Customer support
A virtual assistant can work with the customer support team of a business to provide 24/7 support to customers, offering quick responses that enhance the customer's experience.
Third-party services
Amazon enables Alexa "Skills" and Google enables "Actions": essentially third-party applications that run on the assistant platforms.
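Such skills and actions typically follow a request/response pattern: the platform performs speech recognition and intent classification on the user's utterance, then forwards a structured request naming the recognized intent and its slots to the developer's code, which returns the text the assistant should speak. The Python sketch below is a hypothetical, platform-neutral illustration of that pattern; its request and response fields are simplified assumptions and do not reproduce the exact Alexa Skills Kit or Actions on Google schemas.

```python
# Hypothetical, platform-neutral sketch of a "skill"/"action" handler:
# the assistant platform sends a JSON request naming the recognized intent
# and its slots, and the handler returns the text the assistant should speak.
# Field names below are simplified assumptions, not an official schema.

import json

def handle_request(event: dict) -> dict:
    """Map an intent name from the platform's request to a spoken response."""
    intent = event.get("intent", {}).get("name", "")
    slots = event.get("intent", {}).get("slots", {})

    if intent == "GetHoroscopeIntent":                  # hypothetical intent
        sign = slots.get("sign", "your sign")
        speech = f"Today looks promising for {sign}."
    elif intent == "HelpIntent":
        speech = "You can ask me for a horoscope by zodiac sign."
    else:
        speech = "Sorry, I can't help with that yet."

    # Simplified response envelope: real platforms define richer schemas
    # (cards, reprompts, session attributes, etc.).
    return {"speech": {"type": "PlainText", "text": speech}, "endSession": True}

if __name__ == "__main__":
    sample = {"intent": {"name": "GetHoroscopeIntent", "slots": {"sign": "Leo"}}}
    print(json.dumps(handle_request(sample), indent=2))
```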
Virtual assistant privacy
Virtual assistants have a variety of privacy concerns associated with them. Features such as activation by voice pose a threat, as such features require the device to always be listening.[33] Modes of privacy such as the virtual security button have been proposed to create multilayer authentication for virtual assistants.[34]
Privacy policy of prominent virtual assistants
Google Assistant
The privacy policy of Google Assistant states that it does not store audio data without the user's permission, but may store conversation transcripts to personalise the user's experience. Personalisation can be turned off in settings. If a user wants Google Assistant to store audio data, they can go to Voice & Audio Activity (VAA) and turn on this feature. Audio files are sent to the cloud and used by Google to improve the performance of Google Assistant, but only if the VAA feature is turned on.[35]
Amazon's Alexa
The privacy policy of Amazon's virtual assistant, Alexa, states that it only listens to conversations when its wake word (such as "Alexa", "Amazon", or "Echo") is used. It starts recording the conversation after the wake word is spoken, and stops recording after 8 seconds of silence. It sends the recorded conversation to the cloud. It is possible to delete the recording from the cloud by visiting 'Alexa Privacy' in the Alexa app.[36]
Apple's Siri
Apple states that it does not record audio to improve Siri. Instead, it uses transcripts. Transcript data is only sent if it is deemed important for analysis. Users can opt out at any time if they do not want Siri to send transcripts to the cloud.[37]
Presumed and observed interest for the consumer
Presumed added value in allowing new ways of interaction
The added value of virtual assistants can come, among other things, from the following:
- Voice communication can sometimes represent the optimal channel of human–machine communication:
- It is convenient: in some sectors voice is the only possible means of communication, and more generally it frees up both hands and vision, potentially allowing another activity to be carried out in parallel; it can also help people with disabilities.
- It is faster: voice is more efficient than typing on a keyboard; a person can speak up to 200 words per minute, compared with 60 when typing. It is also more natural and thus requires less effort (reading a text, however, can reach 700 words per minute).[38]
- Virtual assistants save a lot of time through automation: they can make appointments or read the news while the consumer does something else. It is also possible to ask the virtual assistant to schedule meetings, helping the user organize their time. The designers of new digital schedulers have described their ambition for these calendars to schedule lives so that consumers use their time more efficiently, through machine learning processes and the complete organization of work time and free time. For example, when the consumer expresses the desire to schedule a break, the VA will schedule it at an optimal moment for this purpose (for example at a time of the week when they are less productive), with the additional long-term objective of being able to schedule and organize the consumer's free time, to assure them optimal work efficiency.[39]
Perceived interest
- According to a 2019 study, the two main reasons consumers use virtual assistants are perceived usefulness and perceived enjoyment. The study's first result is that both perceived usefulness and perceived enjoyment have an equally strong influence on consumers' willingness to use a virtual assistant.
- The second result of this study is that:
- Provided content quality has a very strong influence on perceived usefulness and a strong influence on perceived enjoyment.
- Visual attractiveness has a very strong influence on perceived enjoyment.
- Automation has a strong influence on perceived usefulness.[40]
Controversies
Artificial intelligence controversies
- Virtual assistants spur the filter bubble: as with social media, virtual assistants' algorithms are trained to show pertinent data and discard the rest based on the consumer's previous activities; the pertinent data is that which will interest or please the consumer. As a result, users become isolated from data that disagrees with their viewpoints, effectively enclosing them in their own intellectual bubble and reinforcing their opinions. This phenomenon is known to reinforce fake news and echo chambers.[41]
- Virtual assistants are also sometimes criticized for being overrated. In particular, A. Casilli points out that the AI of virtual assistants is neither intelligent nor artificial, for two reasons:
- Not intelligent, because all they do is assist the human, and only by doing tasks that a human could do easily, within a very limited spectrum of actions: finding, classifying, and presenting information, offers or documents. Virtual assistants are also neither able to make decisions on their own nor to anticipate things.
- And not artificial, because they would be impossible without the human labelling of data through microwork.[42]
Ethics implications
In 2019 Antonio A. Casilli, a French sociologist, criticized artificial intelligence and virtual assistants in particular in the following way:
At a first level, the fact that the consumer provides free data for the training and improvement of the virtual assistant, often without knowing it, is ethically disturbing.
But at a second level, it might be even more ethically disturbing to know how these AIs are trained with this data.
This artificial intelligence is trained via neural networks, which require a huge amount of labelled data. This data needs to be labelled through a human process, which explains the rise of microwork over the last decade; that is, remotely employing people worldwide to perform repetitive and very simple tasks for a few cents, such as listening to virtual assistant speech data and writing down what was said. Microwork has been criticized for the job insecurity it causes and for the total lack of regulation: the average wage was US$1.38 per hour in 2010,[43] and microwork provides neither healthcare nor retirement benefits, sick pay, or a minimum wage. Hence, virtual assistants and their designers are controversial for spurring job insecurity, and the AIs they propose remain human in the sense that they would be impossible without the microwork of millions of human workers.[42]
Privacy concerns are raised by the fact that voice commands are available to the providers of virtual assistants in unencrypted form, and can thus be shared with third parties and be processed in an unauthorized or unexpected manner.[44] In addition to the linguistic content of recorded speech, a user's manner of expression and voice characteristics can implicitly contain information about his or her biometric identity, personality traits, body shape, physical and mental health condition, sex, gender, moods and emotions, socioeconomic status and geographical origin.[45]
Developer platforms
Notable developer platforms for virtual assistants include:
- Amazon Lex was opened to developers in April 2017. It combines natural language understanding technology with automatic speech recognition and was first introduced in November 2016.[46]
- Google provides the Actions on Google and Dialogflow platforms for developers to create "Actions" for Google Assistant.[47]
- Apple provides SiriKit for developers to create extensions for Siri.
- IBM's Watson, while sometimes spoken of as a virtual assistant, is in fact an entire artificial intelligence platform and community powering some virtual assistants, chatbots, and many other types of solutions.[48][49]
Previous generations
In previous generations of text chat-based virtual assistants, the assistant was often represented by an avatar (also known as an interactive online character or automated character); this was known as an embodied agent.
Economic relevance
For individuals
Digital experiences enabled by virtual assistants are considered to be among the major recent technological advances and most promising consumer trends. Experts claim that digital experiences will achieve a status-weight comparable to 'real' experiences, if not become more sought-after and prized.[50] The trend is verified by a high number of frequent users and the substantial growth of worldwide user numbers of virtual digital assistants. In mid-2017, the number of frequent users of digital virtual assistants was estimated at around one billion worldwide.[51] In addition, it can be observed that virtual digital assistant technology is no longer restricted to smartphone applications, but is present across many industry sectors (including automotive, telecommunications, retail, healthcare and education).[52] In response to the significant R&D expenses of firms across all sectors and an increasing implementation of mobile devices, the market for speech recognition technology is predicted to grow at a CAGR of 34.9% globally over the period 2016 to 2024 and thereby surpass a global market size of US$7.5 billion by 2024.[52] According to an Ovum study, the "native digital assistant installed base" is projected to exceed the world's population by 2021, with 7.5 billion active voice AI–capable devices.[53] According to Ovum, by that time "Google Assistant will dominate the voice AI–capable device market with 23.3% market share, followed by Samsung's Bixby (14.5%), Apple's Siri (13.1%), Amazon's Alexa (3.9%), and Microsoft's Cortana (2.3%)."[53]
Taking into consideration the regional distribution of market leaders, North American companies (e.g. Nuance Communications, IBM, eGain) are expected to dominate the industry over the coming years, due to the significant impact of BYOD (bring your own device) and enterprise mobility business models. Furthermore, the increasing demand for smartphone-assisted platforms is expected to further boost growth in the North American intelligent virtual assistant (IVA) industry. Despite its smaller size in comparison to the North American market, the intelligent virtual assistant industry in the Asia-Pacific region, with its main players located in India and China, is predicted to grow at an annual rate of 40% (above the global average) over the 2016–2024 period.[52]
Economic opportunity for enterprises
Virtual assistants should not be seen only as a gadget for individuals; they can have real economic utility for enterprises. For example, a virtual assistant can take the role of an always-available assistant with encyclopedic knowledge that can organize meetings, check inventories, and verify information. Virtual assistants are all the more important because their integration into small and medium-sized enterprises often serves as an easy first step toward the broader adoption and use of the Internet of Things (IoT). Indeed, IoT technologies are often perceived by small and medium-sized enterprises as being of critical importance, but too complicated, risky or costly to use.[54]
Security
In May 2018, researchers from the University of California, Berkeley, published a paper showing that audio commands undetectable to the human ear could be embedded directly into music or spoken text, thereby manipulating virtual assistants into performing certain actions without the user noticing.[55] The researchers made small changes to audio files, which cancelled out the sound patterns that speech recognition systems are meant to detect. These were replaced with sounds that would be interpreted differently by the system and command it to dial phone numbers, open websites or even transfer money.[55] The possibility of this has been known since 2016,[55] and affects devices from Apple, Amazon and Google.[56]
In addition to unintentional actions and voice recording, another security and privacy risk associated with intelligent virtual assistants is malicious voice commands: an attacker who impersonates a user can issue voice commands to, for example, unlock a smart door to gain unauthorized entry to a home or garage, or order items online without the user's knowledge. Although some IVAs provide a voice-training feature to prevent such impersonation, it can be difficult for the system to distinguish between similar voices. Thus, a malicious person who is able to access an IVA-enabled device might be able to fool the system into thinking that they are the real owner and carry out criminal or mischievous acts.[57]
Comparison of notable assistants
Intelligent personal assistant | Developer | Free software | Free and open-source hardware | HDMI out | External I/O | IoT | Chromecast integration | Smartphone app | Always on | Unit to unit voice channel | Skill language |
---|---|---|---|---|---|---|---|---|---|---|---|
Sydney | Bing.com | No | No | No | No | Yes | No | Yes | Yes | ? | |
Alexa (a.k.a. Echo) | Amazon.com | No | No | No | No | Yes | No | Yes | Yes | ? | JavaScript |
Alice | Yandex | No | — | — | — | Yes | No | Yes | Yes | — | ? |
AliGenie | Alibaba Group | No | No | — | — | Yes | No | Yes | Yes | — | ? |
Assistant | Speaktoit | No | — | — | — | No | No | Yes | No | — | ? |
Bixby | Samsung Electronics | No | — | — | — | No | No | Yes | — | — | JavaScript |
BlackBerry Assistant | BlackBerry Limited | No | — | — | — | No | No | Yes | No | — | ? |
Braina | Brainasoft | No | — | — | — | No | No | Yes | No | — | ? |
Clova | Naver Corporation | No | — | — | — | Yes | No | Yes | Yes | — | ? |
Cortana | Microsoft | No | — | — | — | Yes | No | Yes | Yes | — | ? |
Duer | Baidu[58] | ||||||||||
Evi | Amazon.com and True Knowledge | No | — | — | — | No | No | Yes | No | — | ? |
Google Assistant | Google | No | — | — | — | Yes | Yes | Yes | Yes | — | C++ |
Google Now | Google | No | — | — | — | Yes | Yes | Yes | Yes | — | ? |
Mycroft[59] | Mycroft AI | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Python |
SILVIA | Cognitive Code | No | — | — | — | No | No | Yes | No | — | ? |
Siri | Apple Inc. | No | No | — | — | Yes | No | Yes | Yes | — | ? |
Viv | Samsung Electronics | No | — | — | — | Yes | No | Yes | No | — | ? |
Xiaowei | Tencent | ? | |||||||||
Celia | Huawei | No | No | — | — | Yes | No | Yes | Yes | — | ? |
See also
- Applications of artificial intelligence
- Autonomous agent
- Computer facial animation
- Expert system
- Friendly artificial intelligence
- Home network
- Hybrid intelligent system
- Intelligent agent
- Interactions Corporation
- Knowledge Navigator
- Office Assistant
- Multi-agent system
- Simulation hypothesis
- Social bot
- Social data revolution
- Software bot
- Wizard (software)
References
- ↑ Hoy, Matthew B. (2018). "Alexa, Siri, Cortana, and More: An Introduction to Voice Assistants". Medical Reference Services Quarterly. 37 (1): 81–88. doi:10.1080/02763869.2018.1404391. PMID 29327988. S2CID 30809087.
- ↑ "Siri vs Alexa vs Google Assistant vs Bixby: Which one reigns supreme?". Android Authority.
- ↑ "The Magic of Virtual Assistants and Their Impact on Customer Service".
- 1 2 "The One Thing You Should Definitely be Using AI Chatbot for". 7 April 2023.
- ↑ "A.I. Means everyone gets a 'white-collar' personal assistant, Bill Gates says".
- ↑ US 1209636, Christian Berger, "Sound-Operated Circuit Controller", issued 1916-12-19, assigned to Submarine Wireless Company
- ↑ Markowitz, Judith. "Toys That Have a Voice". SpeechTechMag.
- ↑ Moskvitch, Katia. "The machines that learned to listen". BBC. Retrieved 5 May 2020.
- ↑ Epstein, J; Klinkenberg, W. D (1 May 2001). "From Eliza to Internet: a brief history of computerized assessment". Computers in Human Behavior. 17 (3): 295–314. doi:10.1016/S0747-5632(01)00004-8. ISSN 0747-5632.
- ↑ Weizenbaum, Joseph (1976). Computer power and human reason : from judgment to calculation. Oliver Wendell Holmes Library Phillips Academy. San Francisco : W. H. Freeman.
- ↑ "Smartphone: your new personal assistant – Orange Pop". 10 July 2017. Archived from the original on 10 July 2017. Retrieved 5 May 2020.
- ↑ Murph, Darren (4 October 2011). "iPhone 4S hands-on!". Engadget.com. Retrieved 10 December 2017.
- ↑ "Feature: Von IBM Shoebox bis Siri: 50 Jahre Spracherkennung – WELT" [From IBM Shoebox to Siri: 50 years of speech recognition]. Die Welt (in German). Welt.de. 20 April 2012. Retrieved 10 December 2017.
- ↑ Kundu, Kishalaya (2023). "Amazon expands Echo lineup with new smart speaker, earbuds, and more". XDA. Retrieved 26 May 2023.
- ↑ Sterling, Bruce (13 February 2020). "Web Semantics: Microsoft Project Turing introduces Turing Natural Language Generation (T-NLG)". Wired. ISSN 1059-1028. Retrieved 31 July 2020.
- ↑ "GPT-4 takes the world by storm - List of companies that integrated the chatbot". 21 March 2023.
- ↑ "Conversica Raises $31 Million in Series C Funding to Fuel Expansion of Conversational AI for Business". Bloomberg.com. 30 October 2018. Retrieved 23 October 2020.
- ↑ Herrera, Sebastian (26 September 2019). "Amazon Extends Alexa's Reach Into Wearables". The Wall Street Journal. Retrieved 26 September 2019.
- ↑ "S7617 – Developing Your Own Wake Word Engine Just Like 'Alexa' and 'OK Google'". GPU Technology Conference. Retrieved 17 July 2017.
- ↑ Van Loo, Rory (1 March 2019). "Digital Market Perfection". Michigan Law Review. 117 (5): 815. doi:10.36644/mlr.117.5.digital. S2CID 86402702.
- ↑ La, Lynn (27 February 2017). "Everything Google Assistant can do on the Pixel". CNET. Retrieved 10 December 2017.
- ↑ Morrison, Maureen (5 October 2014). "Domino's Pitches Voice-Ordering App in Fast-Food First | CMO Strategy". AdAge. Retrieved 10 December 2017.
- ↑ O'Shea, Dan (4 January 2017). "LG introduces smart refrigerator with Amazon Alexa-enabled grocery ordering". Retail Dive. Retrieved 10 December 2017.
- ↑ Gibbs, Samuel (7 February 2017). "Amazon's Alexa escapes the Echo and gets into cars | Technology". The Guardian. Retrieved 10 December 2017.
- ↑ "What is Google Assistant, how does it work, and which devices offer it?". Pocket-lint. 6 October 2017. Retrieved 10 December 2017.
- ↑ "'Ask Jenn', Alaska Airlines website". Alaska Airlines. 2 January 2017. Retrieved 10 December 2017.
- ↑ AT&T Tech Channel (26 June 2013). "American Airlines (US Airways) – First US Airline to Deploy Natural Language Speech" (video), Nuance Enterprise on YouTube. Archived from the original on 21 December 2021. Retrieved 10 December 2017 – via YouTube.
YouTube title: Airline Information System, 1989 – AT&T Archives – speech recognition
- ↑ Martin, Taylor; Priest, David (10 September 2017). "The complete list of Alexa commands so far". CNET. Retrieved 10 December 2017.
- ↑ Kongthon, Alisa; Sangkeettrakarn, Chatchawal; Kongyoung, Sarawoot; Haruechaiyasak, Choochart (1 January 2009). "Implementing an online help desk system based on conversational agent". Proceedings of the International Conference on Management of Emergent Digital EcoSystems. MEDES '09. New York, NY, USA: ACM. pp. 69:450–69:451. doi:10.1145/1643823.1643908. ISBN 9781605588292. S2CID 1046438.
- ↑ O'Donnell, Anthony (3 June 2010). "Aetna's new "virtual online assistant"". Insurance & Technology. Archived from the original on 7 June 2010.
- ↑ "How to prepare your products and brand for conversational commerce". 6 March 2018.
- ↑ Taylor, Glenn (5 March 2018). "Retail's Big Opportunity: 87% Of U.S. Consumers Grasp The Power Of Conversational Commerce – Retail TouchPoints".
- ↑ Zhang, Guoming; Yan, Chen; Ji, Xiaoyu; Zhang, Tianchen; Zhang, Taimin; Xu, Wenyuan (2017). "DolphinAttack". Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security - CCS '17. pp. 103–117. arXiv:1708.09537. doi:10.1145/3133956.3134052. ISBN 9781450349468. S2CID 2419970.
- ↑ Lei, Xinyu; Tu, Guan-Hua; Liu, Alex X.; Li, Chi-Yu; Xie, Tian (2017). "The Insecurity of Home Digital Voice Assistants – Amazon Alexa as a Case Study". arXiv:1712.03327 [cs.CR].
- ↑ "Doing more to protect your privacy with the Assistant". Google. 23 September 2019. Retrieved 27 February 2020.
- ↑ "Alexa, Echo Devices, and Your Privacy". Amazon.com. Retrieved 27 February 2020.
- ↑ "Improving Siri's privacy protections". Apple Newsroom. Retrieved 27 February 2020.
- ↑ Minker, W.; Néel, F. (2002). "Développement des technologies vocales". Le Travail Humain. 65 (3): 261. doi:10.3917/th.653.0261. ISSN 0041-1868.
- ↑ Wajcman, Judy (2019). "The Digital Architecture of time Management" (PDF). Science, Technology, & Human Values. 44 (2): 315–337. doi:10.1177/0162243918795041. S2CID 149648777.
- ↑ Yang, Heetae; Lee, Hwansoo (26 June 2018). "Understanding user behavior of virtual personal assistant devices". Information Systems and E-Business Management. 17 (1): 65–87. doi:10.1007/s10257-018-0375-1. ISSN 1617-9846. S2CID 56838915.
- ↑ Tisseron, Serge (2019). "La famille sous écoute". L'École des Parents. n° 632 (3): 16. doi:10.3917/epar.632.0016. ISSN 0424-2238. S2CID 199344092.
- ↑ Casilli, Antonio A. (2019). En attendant les robots. Enquête sur le travail du clic. Editions Seuil. ISBN 978-2-02-140188-2. OCLC 1083583353.
- ↑ Horton, John Joseph; Chilton, Lydia B. (2010). "The labor economics of paid crowdsourcing". Proceedings of the 11th ACM conference on Electronic commerce. EC '10. New York, New York, USA: ACM Press. pp. 209–218. arXiv:1001.0627. doi:10.1145/1807342.1807376. ISBN 978-1-60558-822-3. S2CID 18237602.
- ↑ "Apple, Google, and Amazon May Have Violated Your Privacy by Reviewing Digital Assistant Commands". Fortune. 5 August 2019. Retrieved 13 May 2020.
- ↑ Kröger, Jacob Leon; Lutz, Otto Hans-Martin; Raschke, Philip (2020). "Privacy Implications of Voice and Speech Analysis – Information Disclosure by Inference". Privacy and Identity Management. Data for Better Living: AI and Privacy. IFIP Advances in Information and Communication Technology. Vol. 576. pp. 242–258. doi:10.1007/978-3-030-42504-3_16. ISBN 978-3-030-42503-6. ISSN 1868-4238.
- ↑ "Amazon Lex, the technology behind Alexa, opens up to developers". TechCrunch. 20 April 2017. Retrieved 10 December 2017.
- ↑ "Actions on Google | Google Developers". Retrieved 10 December 2017.
- ↑ "Watson – Stories of how AI and Watson are transforming business and our world". Ibm.com. Retrieved 10 December 2017.
- ↑ Memeti, Suejb; Pllana, Sabri (January 2018). "PAPA: A parallel programming assistant powered by IBM Watson cognitive computing technology". Journal of Computational Science. 26: 275–284. doi:10.1016/j.jocs.2018.01.001.
- ↑ "5 Consumer Trends for 2017". TrendWatching. 31 October 2016. Retrieved 10 December 2017.
- ↑ Richter, Felix (26 August 2016). "Chart: Digital Assistants – Always at Your Service". Statista. Retrieved 10 December 2017.
- 1 2 3 "Virtual Assistant Industry Statistics". Global Market Insights. 30 January 2017. Retrieved 10 December 2017.
- 1 2 "Virtual digital assistants to overtake world population by 2021". ovum.informa.com. Retrieved 11 May 2018.
- ↑ Jones, Nory B.; Graham, C. Matt (February 2018). "Can the IoT Help Small Businesses?". Bulletin of Science, Technology & Society. 38 (1–2): 3–12. doi:10.1177/0270467620902365. ISSN 0270-4676. S2CID 214031256.
- 1 2 3 "Alexa and Siri Can Hear This Hidden Command. You Can't". The New York Times. 10 May 2018. ISSN 0362-4331. Retrieved 11 May 2018.
- ↑ "As voice assistants go mainstream, researchers warn of vulnerabilities". CNET. 10 May 2018. Retrieved 11 May 2018.
- ↑ Chung, H.; Iorga, M.; Voas, J.; Lee, S. (2017). "Alexa, Can I Trust You?". Computer. 50 (9): 100–104. doi:10.1109/MC.2017.3571053. ISSN 0018-9162. PMC 5714311. PMID 29213147.
- ↑ "Baidu unveils 3 smart speakers with its Duer digital assistant". 8 January 2018.
- ↑ MSV, Janakiram (20 August 2015). "Meet Mycroft, The Open Source Alternative To Amazon Echo". Forbes. Retrieved 27 October 2016.