Saturday, May 7, 2016

8 things Alexa can't yet do

More useful features are in the speaker's future, thanks to third-party add-ons called Skills and a growing list of native integrations that Amazon has added.

That said, there is still plenty Alexa can't do -- things that, at times, seem like no-brainers.

Custom alarms

It only makes sense that a connected speaker like Alexa would come equipped with some killer alarm features. However, that isn't the case.

Until recently, if you wanted an alarm to wake you on a daily basis, you had to tell Alexa when you wanted an alarm to sound each and every day. Recurring alarms weren't possible until just over a month ago. Now you can edit existing alarms in the Alexa app to recur weekly, on weekdays or on weekends. Or you can create new recurring alarms by saying, "Alexa, set an alarm for 7:00 a.m. every weekday."

Given that Alexa's primary function is as a speaker, you would imagine some of its music features would be available as alarm sounds. Sadly, that's not possible without a workaround that involves streaming the alarm sound from your mobile device via Bluetooth.

Actions on IFTTT

Thanks to IFTTT integration, Alexa now works with dozens of smart-home devices, can create tasks in Todoist and can trigger multiple recipes with a single phrase.

The shortfall of Alexa's IFTTT integration is the lack of any actions whatsoever. Alexa can only be used as a trigger in IFTTT. That means I can't, for example, complete a task in Todoist and have Alexa play a song. Or as editor Ry Crist suggested, you can't have Alexa play sound bites, such as a dog barking, when motion is detected or a door opens in the middle of the night.

String commands

If you want Alexa to do more than one thing, you can't tell it everything you want at once. For instance, saying, "Alexa, turn on the lights, play the Evening Chill playlist on Spotify, and turn the temperature up," won't work as intended.

Currently, you must divide every command into its own statement:
  • "Alexa, turn on the lights."
  • "Alexa, play Evening Chill playlist on Spotify."
  • "Alexa, turn the temperature up."
The only workaround for chaining multiple actions to a single Alexa command is by creating several IFTTT recipes with the same trigger phrase or a Yonomi routine.

Notifications

Alexa offers virtually nothing in the way of notifications, audio or visual. With the ability to read top headlines to you in your Flash Briefing, the next logical step -- privacy concerns aside -- would be to have Alexa read messages from your inbox or incoming text messages, or to speak the name of the person calling you.

Amazon package tracking

You can order certain products from Amazon using Alexa, such as eligible items you've purchased before or top Amazon products. You cannot, however, ask Alexa for a status update on your recent orders.

Instead, you'll have to rely on the Amazon app, your own package tracking app, or smart lights to do that for you.

Custom trigger names or voices

Alexa devices have only three words that will wake them: Alexa, Amazon or the name of the device (Echo, Dot or Tap). If you want a truly customized wake word, you're sadly out of luck.

And if English isn't your native language, or you aren't fond of a female voice for Alexa, there are currently no options for customizing either of those settings.

Swearing

If you use the "Simon says" command, Alexa will repeat anything you say. Even if you speak a number of expletives, Alexa will repeat your words, but the swearing will be bleeped out. Some people might wish she did otherwise.

Voice memos or send audio messages

It's no secret that Alexa is always listening. And every time you speak a command, that audio snippet is recorded, saved and transcribed. Every audio file Alexa records is saved to your Amazon account.

You can open the Alexa application and listen to the recordings of some of your recent commands. But possibly one of the most bizarre missing features is a voice-memo function. The least it could do is allow you to export your own voice commands from within the app, but that's not currently possible.

It would also be nice to leave voice memos for your significant other or roommate, or send audio snippets to other Alexa users. For example:
  • "Alexa, leave a message for Jack. [Wait for beep.] Don't forget to grab bread from the grocery store."
  • "Alexa, send a message to Alex. [Wait for beep.] Do you have plans for tonight?"
This, of course, would require better friend or contact management from Alexa, as well as voice recognition, which brings me to my final point.

Friday, April 29, 2016

Musical training gives Stanford engineers a creative lift

Stanford engineers can get scholarships to study music and other arts.  | REUTERS/Jo Yong-Hak
The thank-you letters began arriving not long after Persis Drell became dean of Stanford Engineering. One or two at first, then more trickled in from graduate and undergraduate students alike. But they were not about algorithms or nanoparticles or systems. They were about music.

“These beautiful letters were from engineering students whose music lessons were paid for by the School of Engineering,” Drell says. “And they were just so wonderful and full of sincere joy. I was just very taken by them.”

The letters were inspired by the Engineers in the Arts Scholarship, which provides partial funding for music lessons for declared engineering students who apply. There is no need requirement, but award levels for undergraduates may vary depending on their financial aid profile.

The program was started by former Dean James Gibbons, a jazz trombonist, who presided over Stanford Engineering from 1984 to 1996. It continued quietly over the years until those thank-you notes delighted Drell, a physicist by training and a cellist by avocation.

“I’m an amateur. I’ll never perform anywhere, but it enriches my life in incredibly important ways,” says Drell, who has discovered since becoming dean that many of her Stanford Engineering colleagues share her passion for music. For instance, Drell has played with David Miller (clarinet) and Tom Lee (violin), both members of the electrical engineering faculty. Computer science professor emeritus Don Knuth regularly plays the pipe organ he has installed in his home. Chemical engineering professor emeritus Channing Robertson is known to coax melodies from the piano. And the list goes on.

Nor is the symbiosis between music and engineering merely a Stanford phenomenon. At universities and corporations across the world, countless engineers are dedicated musical amateurs, and many bands and orchestras feature professionals who studied engineering. For instance, Microsoft co-founder Paul Allen, benefactor of the engineering building that bears his name, is a well-known guitar enthusiast.

“A program like Engineers in the Arts is the perfect exemplification of what being a Stanford engineer is all about,” Drell says. “Despite perception as single-minded techies, we are actually people of broad interests and talents and we bring those interests into our work as engineers. I think music actually makes us better engineers.”
Prelude to a program

The Engineers in the Arts Scholarship traces its roots to Gibbons, who grew up in Texarkana, Texas, “probably best known in music circles as the home of Scott Joplin,” the ragtime composer and pianist, he says with a smile. He recalls learning how to play trombone from a book he was given by his older brother, himself a trombonist in the U.S. Navy Band during World War II. Gibbons played in his high school band and at dance clubs, but when it came time for college, he was offered a fateful choice: take a full-ride music scholarship at Louisiana State University, or study engineering at Northwestern University near Chicago.

Engineering won, but even at Northwestern, Gibbons earned money by playing clubs around Chicago. Eventually the opportunities of solid-state electronics triumphed and music became a pleasurable hobby. In the late 1950s Gibbons, who received his PhD at Stanford, returned to campus to work part time at the pioneering firm Shockley Semiconductor and also create a lab where faculty and graduate students could do research in semiconductor physics and technology.

But music continued to enliven his teaching. “For many years, as my undergraduates drifted into class, I would play from the composer or performer whose birthday was closest to the date of the lecture,” he recalls. “I started class with a few remarks about the artist and students loved it.”

Not long after Gibbons became dean, the Music Department sought his help. Several of the “first chair” or lead players in the Symphony and Wind Ensemble were engineers, and the Music Department wanted all of their first chair players to take lessons. Gibbons decided to raise the money for this specialized instruction but took the whole idea one step further by making arts training available to any Stanford engineering student who wanted it.

“We called it Engineers in the Arts because it wasn’t just music lessons, but included studio arts, too,” he remembers.

For a benefactor, Gibbons turned to Peter Bing, who, with his wife, would later endow Bing Concert Hall on the Stanford campus. Soon, a $25,000 fund was in place: half funded by the Dean of the School of Engineering, the other half by Bing.

The program is now in its 31st year. That explains why so many students write to Drell — as they had to Gibbons and to former Deans John Hennessy and Jim Plummer before her — with heartfelt gratitude for encouraging and furthering their musical studies.
Scratching the itch

Daniel Borup is among the students who have taken advantage of the program. A doctoral candidate in mechanical engineering, Borup uses his scholarship to take classical voice lessons. He likes that music and engineering are both very systematic. He finds the notes of the musical scale to be a sort of engineering parameter.

“Like the laws of physics and thermodynamics to the engineer, as a musician you have to work with the boundaries of the musical notes to create something new,” Borup says, adding of his love of music: “I will always sing and I will always be grateful to the deans for allowing me to continue to develop as a singer through Engineers in the Arts.”

Zach Yellin-Flaherty is another beneficiary of the program. A recently graduated computer science co-term, he came to Stanford having already learned to play piano and used the program to shift his focus to the guitar.

“I find coding and music to be different aspects of the same thing. It’s all about creativity,” he says. “Music might not be quite as logical as engineering, but both involve making connections and tying things together in interesting and inventive ways. They scratch the same itch.”

Yellin-Flaherty took guitar lessons from Rick Vandivier, a lecturer in jazz guitar at the Department of Music who studied at the Berklee College of Music in Boston. In a career spanning 30 years, Vandivier has played with notables such as David Grisman, Mose Allison and Dr. Lonnie Smith, as well as with the San José Symphony Orchestra and the American Musical Theatre of San Jose.

“It was great to take lessons from someone of Rick’s caliber while I completed my education,” Yellin-Flaherty says, adding: “I hope to keep playing and growing as a musician while building a career as an engineer, but it will take balance.”

Thursday, March 17, 2016

Wireless tech means safer drones, smarter homes and password-free WiFi


We’ve all been there, impatiently twiddling our thumbs while trying to locate a WiFi signal. But what if, instead, the WiFi could locate us?

According to researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), it could mean safer drones, smarter homes, and password-free WiFi.

In a new paper, a research team led by Professor Dina Katabi presents a system called Chronos that enables a single WiFi access point to locate users to within tens of centimeters, without any external sensors.


The group demonstrated Chronos in an apartment and a cafe, while also showing off a drone that maintains a safe distance from its user with a margin of error of about four centimeters.

“From developing drones that are safer for people to be around, to tracking where family members are in your house, Chronos could open up new avenues for using WiFi in robotics, home automation and more,” says PhD student Deepak Vasisht, who is first author on the paper alongside Katabi and former PhD student Swarun Kumar, who is now an assistant professor at Carnegie Mellon University. “Designing a system that enables one WiFi node to locate another is an important step for wireless technology.”

Experiments conducted in a two-bedroom apartment with four occupants show that Chronos can correctly identify which room a resident is in 94 percent of the time. For the cafe demo, the system was 97 percent accurate in distinguishing in-store customers from out-of-store intruders, meaning it could be used by small businesses to prevent non-customers from stealing their WiFi. (32 percent of Americans have copped to this cyber-crime.)

Chronos locates users by calculating the “time-of-flight” that it takes for data to travel from the user to an access point. The system is 20 times more accurate than existing systems, computing time-of-flight with an average error of 0.47 nanoseconds, or less than half a billionth of a second.
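To put that timing figure in perspective, multiplying the error by the speed of light gives the corresponding distance error. This back-of-the-envelope sketch (the constant and function names are mine, not from the paper) shows that 0.47 nanoseconds works out to roughly 14 centimeters, consistent with the "tens of centimeters" accuracy quoted above.

```python
SPEED_OF_LIGHT = 299_792_458  # meters per second

def tof_error_to_distance(tof_error_seconds: float) -> float:
    """Convert a time-of-flight error into the distance error it implies."""
    return tof_error_seconds * SPEED_OF_LIGHT

# A 0.47 ns timing error corresponds to about 0.14 m of distance error.
print(f"{tof_error_to_distance(0.47e-9) * 100:.1f} cm")  # 14.1 cm
```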

Vasisht presented the paper at this month’s USENIX Symposium on Networked Systems Design and Implementation (NSDI '16).

How it works
Existing localization methods have required four or five WiFi access points. This is because today’s WiFi devices don’t have wide enough bandwidth to measure time-of-flight, and so researchers have only been able to determine someone’s position by triangulating multiple angles relative to the person.

What Chronos adds is the ability to calculate not just the angle, but the actual distance from a user to an access point, as determined by multiplying the time-of-flight by the speed of light.

“Knowing both the distance and the angle allows you to compute the user’s position using just one access point,” says Vasisht. “This is encouraging news for the many small businesses and consumers that don’t have the luxury of owning several access points.”
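The geometry here is ordinary polar-to-Cartesian conversion: the time-of-flight gives the distance (distance equals time multiplied by the speed of light), and the angle of arrival gives the direction. A minimal sketch of that calculation, with hypothetical function names not taken from the paper:

```python
import math

SPEED_OF_LIGHT = 299_792_458  # meters per second

def locate(tof_seconds: float, angle_radians: float) -> tuple[float, float]:
    """Estimate a user's 2-D position relative to a single access point
    from the signal's time-of-flight and angle of arrival."""
    distance = tof_seconds * SPEED_OF_LIGHT  # distance = time x speed of light
    return (distance * math.cos(angle_radians),
            distance * math.sin(angle_radians))

# A user whose signal took 3 meters' worth of flight time, arriving at 45 degrees:
x, y = locate(3 / SPEED_OF_LIGHT, math.radians(45))
print(f"({x:.2f} m, {y:.2f} m)")  # (2.12 m, 2.12 m)
```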

Exploiting the fact that WiFi lets you hop on different frequency channels, the team programmed the system to jump from channel to channel, gathering many different measurements of the distance between access points and the user. Chronos then automatically “stitches” together these measurements to determine the distance.
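One standard way to recover time-of-flight from multi-channel measurements, and roughly the idea behind this stitching, is that a signal's phase advances linearly with frequency in proportion to its time-of-flight (phase = 2π × frequency × time), so fitting a line to phase-versus-frequency samples across channels yields the time-of-flight from the slope. The sketch below is a simplified illustration under idealized assumptions of my own (clean, unwrapped phases and no hardware offsets); the paper's actual algorithm must also handle phase wrapping and the delays described next.

```python
import math

def estimate_tof(freqs_hz: list[float], phases_rad: list[float]) -> float:
    """Least-squares slope of phase vs. frequency; since phase = 2*pi*f*tof,
    the time-of-flight is slope / (2*pi). Assumes unwrapped phases."""
    n = len(freqs_hz)
    mean_f = sum(freqs_hz) / n
    mean_p = sum(phases_rad) / n
    slope = (sum((f - mean_f) * (p - mean_p)
                 for f, p in zip(freqs_hz, phases_rad))
             / sum((f - mean_f) ** 2 for f in freqs_hz))
    return slope / (2 * math.pi)

# Synthetic data: a 10-nanosecond time-of-flight observed on three
# 2.4 GHz WiFi channel center frequencies.
freqs = [2.412e9, 2.437e9, 2.462e9]
phases = [2 * math.pi * f * 10e-9 for f in freqs]
print(estimate_tof(freqs, phases))  # ~1e-08
```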

“By devising a method to rapidly hop across these channels that span almost one gigahertz of bandwidth, Chronos can measure time-of-flight with sub-nanosecond accuracy, emulating with commercial WiFi what has previously needed an expensive ultra-wideband radio,” says Venkat Padmanabhan, a principal researcher at Microsoft Research India. “This is an impressive breakthrough and promises to be a key enabler for applications such as high-accuracy indoor localization.”

That said, getting an accurate time-of-flight with this method still isn’t easy, due to three sets of delays that happen during the transfer.

First, when you wirelessly send a piece of web data, there is a delay in detecting the presence of the “packet” that is hard to distinguish from the actual time-of-flight. To account for it, the team exploits the fact that WiFi uses an encoding method that transmits bits of packets on several even smaller frequencies.

Secondly, if you’re indoors the WiFi signals can bounce off walls and furniture, meaning that the receiver gets several copies of the signal that each experience different times-of-flight. To identify the actual direct path, researchers developed a mechanism to algorithmically determine the delays experienced by all of these copies. From there, they can identify the path with the smallest time-of-flight as the direct path.
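Because reflected copies always travel farther than the direct path, the direct path is simply the candidate with the smallest time-of-flight. A trivial sketch of that selection step (the function name is mine):

```python
def direct_path_tof(candidate_tofs: list[float]) -> float:
    """Pick the direct path among multipath copies: reflections off walls
    and furniture travel farther, so the direct path is the copy with the
    smallest time-of-flight."""
    return min(candidate_tofs)

# Direct path at 10 ns, plus two longer reflected paths:
print(direct_path_tof([14e-9, 10e-9, 22e-9]))  # 1e-08
```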

Lastly, the team’s channel-hopping approach leads to one other complication: every time Chronos hops to a new band, the hardware resets, adding a delay known as a “phase offset.” To address this, the team used the fact that in WiFi, you get an acknowledgement back for each data packet that your phone sends. The team uses these acknowledgements to intelligently cancel out the phase offsets.

The success of Chronos suggests that WiFi-based positioning could help for other situations where there are limited or inaccessible sensors, like finding lost devices or controlling large fleets of drones.

“Imagine having a system like this at home that can continuously adapt the heating and cooling depending on the number of people in the home and where they are,” says Katabi. “Eliminating the need for cooperation between WiFi routers opens up many exciting new applications for localization.”

Tuesday, February 23, 2016

Can technology help teach literacy in poor communities?

For the past four years, researchers at MIT, Tufts University, and Georgia State University have been conducting a study to determine whether tablet computers loaded with literacy applications could improve the reading preparedness of young children living in economically disadvantaged communities.

At the Association for Computing Machinery’s Learning at Scale conference this week, they presented the results of the first three deployments of their system. In all three cases, study participants’ performance on standardized tests of reading preparedness indicated that the tablet use was effective.

The trials examined a range of educational environments. One was set in a pair of rural Ethiopian villages with no schools and no written culture; one was set in a suburban South African school with a student-to-teacher ratio of 60 to 1; and one was set in a rural U.S. school with predominantly low-income students.

In the African deployments, students who used the tablets fared much better on the tests than those who didn’t, and in the U.S. deployment, the students’ scores improved dramatically after four months of using the tablets. "The whole premise of our project is to harness the best science and innovation to bring education to the world’s most underresourced children," says Cynthia Breazeal, an associate professor of media arts and sciences at MIT and first author on the new paper. "There’s a lot of innovation happening if you happen to be reasonably affluent — meaning you have regular access to an Internet-connected computer or mobile device, so you can get online and access Khan Academy. There’s a lot of innovation happening if you’re around eight years old and can type and move a mouse around. But there’s relatively little innovation happening with the early-childhood-learning age group, and there’s a ton of science saying that that’s where you get tremendous bang for your buck. You’ve got to intervene as early as possible."

Breazeal is joined on the paper by Maryanne Wolf and Stephanie Gottwald, who are, respectively, the director and assistant director of the Center for Reading and Language Research at Tufts; Tinsley Galyean, a research affiliate at the MIT Media Lab and executive director of Curious Learning, a nonprofit organization the researchers created to develop and deploy their system; and Robin Morris, a professor of psychology at Georgia State University.

Self-starting

The concentration on early literacy reflects Wolf’s theory, popularized in her book "Proust and the Squid," that the capacity to read, unlike the capacity to process spoken language, is not hard-coded into our genes. Consequently, early training is essential to establishing the neurological machinery on which the very capacity for literacy depends.

The researchers’ system consists of an inexpensive tablet computer using Google’s Android operating system. Wolf and Gottwald combed through the literacy and early-childhood apps available for Android devices to identify several hundred that met their quality criteria and addressed a broad enough range of skills to lay a foundation for early reading education. The researchers also developed their own interface for the tablets, which grants users access only to approved educational apps. Across the three deployments, the tablets were issued to children ranging in age from 4 to 11.

"When we do these deployments, we purposely don’t tell the kids how to use the tablets or instruct them about any of the content,dz Breazeal says. DzOur argument is, if you’re going to be able to scale this to reach 100 million kids, you can’t bring people in to coach kids what to do. You just make the tablets available, and they need to figure everything out from then on out. And what we find is, the kids do it. When we first did Ethiopia, we had all these protocols and subprotocols. What if it’s a week and they haven’t turned them on? What if it’s three weeks and they haven’t turned them on? Within minutes, the kids turn them on. By the end of the day, they’ve literally explored every app on the tablet."

Results

The Ethiopian trial, which the researchers conducted in collaboration with the One Laptop per Child program, involved children aged 4 to 11 who had no prior exposure to spoken English or any written language. After a year using the tablets, children were tested on their understanding of roughly 20 spoken English words, taken at random from apps loaded on the tablets. More than half of the students knew at least half the words, and all the students knew at least four.

When presented with strings of Roman letters in a random order, 90 percent could identify at least 10 of them, and all the children could supply the sounds corresponding to at least two of them. Perhaps most important, 35 percent of the children could recognize at least one English word by sight. These figures roughly accord with those of children entering kindergarten in the U.S.

In the South African trial, rising second graders who had been issued tablets the year before were able to sound out four times as many words as those who hadn’t, and in the U.S. trial, which involved only 4-year-olds and lasted only four months, half-day preschool students were able to supply the sounds corresponding to nearly six times as many letters as they had been before the trial.

Since the trials reported in the new paper, Curious Learning has launched new trials in Uganda, Bangladesh, India, and the U.S. In all, 2,000 children have had the opportunity to use the tablets.

Currently, the team is concentrating on analyzing data collected from the trials. Which apps do the children spend most time with? Which apps’ use correlates best with literacy outcomes? Curious Learning is also looking for partners to help launch larger pilot programs, with 5,000 to 10,000 children.

"There’s a core scientific question, which is understanding what the nature of this child-driven, curiosity-driven learning looks like," Breazeal says. "We need to understand how they learn, which is a fundamentally social process, where they explore the tablet together, they discover things through that exploration, and then they talk-talk-talk-talk, and they share those ideas. So it’s a profoundly social, peer-to-peer-based learning process. We have to have create a technology and an experience that supports that process."