Up to 880,000 bank cards may be at risk after a subsidiary of online travel agency Expedia reported a potential hack.
Travel fare aggregator Orbitz reported that hackers may have gained access to users’ personal information, including names, phone numbers, emails and billing addresses.
The company, which is owned by Expedia, offers booking options and deals on flights, accommodation and holiday activities.
It is both a consumer platform and a back-end provider for companies including American Express, which has said the issue could affect people who booked through Amextravel.com.
The breach is thought to have occurred between January 2016 and December 2017 for the partner platform, and between January and June 2016 for the consumer site.
“To date, we do not have direct evidence that this personal information was actually taken from the platform and there has been no evidence of access to other types of personal information, including passport and travel itinerary information,” Orbitz said.
The company said it addressed the breach after it was discovered earlier this month, while American Express issued a statement saying the breach did not compromise its platforms.
The current Orbitz website is not thought to be affected by the breach.
Neil Haskins, director of security firm IOActive, said the breach “doesn’t look great” for the company.
“The data that may have been accessed is extremely personal,” he said. “With this information exposed, you can imagine the damage that could be done to the customers that have been affected.”
Expedia shares fell as much as 1.9%, adding to a decline of more than 8% over the last year.
Orbitz has said those affected can access a year of free credit monitoring and identity protection services.
Apple unveils new emoji to represent disabilities
Apple has proposed a set of new emoji to provide better representation of people with disabilities.
The 13 emoji include guide dogs, hearing aids, prosthetic limbs and people using canes and wheelchairs.
Image: It is hoped that the emoji will be approved and released by next year
In a statement, the tech giant said: “Apple is requesting the addition of emoji to better represent individuals with disabilities.
“Currently, emoji provide a wide range of options, but may not represent the experiences of those with disabilities.
“One in seven people around the world has some form of disability, whether that be a physical disability involving vision, hearing or loss of physical motor skills, or a more hidden, invisible disability.”
Apple said its proposed additions are “not meant to be a comprehensive list of all possible depictions of disabilities – it is intended to be a starting point”.
Image: Apple worked with various charities on the new designs
While coming up with the new emoji, the tech giant worked with various disability charities including the American Council of the Blind, the Cerebral Palsy Foundation and the National Association of the Deaf.
If approved, the emoji are likely to be released early in 2019.
The plans have received a warm reaction on social media, with Jordan Samuel tweeting: “This is awesome! It’s great to see support for the disabled community despite never using emojis it’s a great addition.”
Charles Matthews wrote: “At last Apple proposing more diverse representation in emojis. I am amazed it has taken so long.”
Scandal does not spell end of Facebook
By Adele Robinson, Sky News Correspondent in San Francisco
He has stemmed the bleeding, but is it enough?
What Mark Zuckerberg says to his staff at Facebook HQ will need to motivate, reassure, and above all convince them that the company they work for is still “on mission”.
He’s long talked about the social media platform being “a force for good”, but to the outside world right now that’s very much up for debate.
This is not just about being able to retain and recruit top talent; it’s about addressing the company’s current existential crisis.
And the core values of the social media giant must not be forgotten here.
They incorporate key phrases like “build community”, “closer together” and “social value”.
But the one that stands out right now, rather ironically, is Core Value number three: “Move Fast”.
Well, that’s certainly not what Mr Zuckerberg has stuck to – at least publicly – so far.
His wall of silence lasted for nearly five days in the wake of the Cambridge Analytica scandal.
I was even parachuted into San Francisco at the height of the controversy to “doorstep” the man himself at his home in Palo Alto.
Unsurprisingly he wasn’t available, but the gesture was more symbolic than anything else.
I was never going to catch a glimpse of this man – his house was quite rightly surrounded by security – but thousands of his employees deserved better.
At an internal meeting at Facebook in Menlo Park this week, he and his chief operating officer, Sheryl Sandberg, were reportedly absent.
He is now expected to address the company at a previously scheduled meeting on Friday.
Some analysts suggest that while trust in Facebook is at an all-time low, the tech giant will weather this storm as it has done many others.
Casey Newton, Silicon Valley editor at The Verge, admits there is an ongoing “cultural reckoning” with the platform, but says it’s not about to disappear (despite what the #deletefacebook movement suggests).
“This is not the first time Zuckerberg has had to apologise,” Mr Newton told me.
“In fact I was looking over the history and over the last decade there have been at least 11 times that he has had to come forward and apologise.
“It’s not unusual that one of the biggest tech companies is going to blunder and they are going to have to apologise… but is it all over for Facebook? No I don’t think so.”
And he’s right. It’s not. The platform still has two billion users worldwide.
But in the wake of the Russian election interference scandal, misinformation, and now this – the tide is certainly turning for Facebook from a political standpoint.
In the midst of multiple global investigations, security and privacy are paramount.
The way our data is distributed and how we consent to that will no doubt change as a result of this.
And Silicon Valley security experts are watching and waiting for the impending regulation.
Sameer Dixit, security consultant at Spirent Communications in Silicon Valley, says that lessons will be learned across the board but there can never be “zero risk”.
“When we tie our seat belts in a car,” he said, “we are taking a step that in an event of a crash we get hurt less.
“It doesn’t guarantee that nothing bad will happen to you. Similarly with incidents like this and learnings from this.
“There will be more regulations and more stricter privacy laws… but there is nothing called absolute security.”
This is, admittedly, probably less about the poster boy of the San Francisco tech elite and more about the larger issues at hand.
But for Mr Zuckerberg, who was last year softly touted online as a presidential possibility, his defining moment may end up being wrapped up in one of the most significant and damaging privacy issues of the digital age so far.
Warrant granted to search data scandal firm
The High Court has granted an application by the Information Commissioner’s Office for a warrant to search Cambridge Analytica’s London office.
The ICO says it is “pleased with the decision of the judge” and will execute the warrant “shortly”.
“This is just one part of a larger investigation into the use of personal data for political purposes and we will now need time to collect and consider the evidence,” it said.
Information Commissioner Elizabeth Denham wants to access London-based Cambridge Analytica’s records and data.
The company, which uses data to change the behaviour of internet users, was hired by Donald Trump’s campaign team during the 2016 presidential election.
Image: The shared building houses the offices of Cambridge Analytica
The firm is accused of illegally harvesting the personal data of 50 million Facebook users.
It is alleged this information was given to Mr Trump’s campaign strategists to provide an insight into the thoughts of American voters, ultimately influencing the 2016 presidential election.
The data watchdog’s investigation includes the acquisition and use of Facebook data by Cambridge Analytica, its parent company SCL and academic Dr Aleksandr Kogan.
Dr Kogan is the University of Cambridge academic who developed the app ‘This Is Your Digital Life’ through his company Global Science Research (GSR) in collaboration with Cambridge Analytica.
The app offered payment in return for users filling out a personality test and Facebook says it was downloaded by 270,000 people.
The app also allegedly gave Dr Kogan access to the lists of the downloaders’ Facebook friends.
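The reported figures give a rough sense of how much that friends-list access multiplied the app’s reach. The back-of-the-envelope sketch below, in Python, uses only the 270,000 downloads and 50 million profiles reported above; the friends-per-user value is simply the implied ratio, not a reported statistic.

```python
# Rough illustration of how access to downloaders' friend lists scales reach.
# Inputs are the figures reported in this article, not independent data.
downloaders = 270_000
harvested_profiles = 50_000_000

implied_avg_friends = harvested_profiles / downloaders
print(f"Implied average unique friends per downloader: {implied_avg_friends:.0f}")
# ~185 -- each consenting user's friend list extended the app's reach to
# a couple of hundred people who never installed it themselves.
```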
Cambridge Analytica’s chief executive Alexander Nix has been suspended while Facebook founder Mark Zuckerberg has been called on to give evidence to MPs.
The judge told the court he would give his reasons for granting the application on Tuesday.
Both Cambridge Analytica and Facebook deny any wrongdoing.
Mark Zuckerberg on Wednesday admitted the company “made mistakes” but said steps had been taken to protect users.
He said he is “open” to testifying before the US Congress on the scandal.
The role of humans in self-driving cars is even more complicated after Uber’s fatal crash
Image: One of Uber’s self-driving SUVs, like the one involved in the fatal accident on March 18 (Credit: Uber)
A typical Uber driver has clearly defined responsibilities. Arrive on time, know your route, keep your car clean, and, most importantly, safely deliver your passenger to their destination. Sitting behind the wheel of a self-driving Uber—or any autonomous vehicle, for that matter—is, paradoxically, more complicated. A recent, tragic incident in which a self-driving Uber struck and killed a 49-year-old pedestrian, while a safety driver sat behind the wheel, has stirred up many conversations about blame, regulations, and the overall readiness of autonomous tech. The lingering question, however, is how we humans fit into this picture.
The Tempe Police Department released a 14-second clip of the moments leading up to the fatal Uber crash. It shows an outside video, which includes the victim, as well as an inside view of the cabin, which shows the reaction—or lack of reaction—by the person in the driver’s seat.
What does it mean to be a safety driver?
Both Uber and Lyft (the latter of which hadn’t responded to a request for comment at the time of publication) have dedicated training programs to teach flesh-and-blood people how to act when behind the wheel of a car that drives itself. According to a schedule Uber provided to PopSci.com, the training program includes both theoretical and practical evaluations.
By our reading of these materials, it appears that Uber expects that a driver may sometimes need to take control of the vehicle, but the specific circumstances in which that’s the case are somewhat unclear. While Lyft is more tight-lipped about its onboarding process for new drivers, the company does provide a little more insight about when the human is meant to take over command.
In Lyft’s FAQ about its self-driving program, it states that the “pilots” are “constantly monitoring the vehicle systems and surrounding environment, and are ready to manually take control of the vehicle if an unexpected situation arises.” It goes on to specifically mention complicated traffic patterns, like detours, or humans directing traffic around things like construction.
While Uber’s guidelines may differ from Lyft’s, the concept of constantly monitoring the car’s surroundings has been a keystone in discussions about Uber’s fatal crash. The video appears to show the car’s driver looking down into the cabin rather than out in the direction the car is traveling.
“On two separate occasions, the driver seems to be looking down at something for nearly five seconds,” says Bryant Walker Smith, a leading legal expert in the arena of autonomous vehicle deployment. “At 37 miles per hour, a car covers about 250 feet in 5 seconds.” Average reaction time for a human driver is in the neighborhood of 2.3 seconds, which suggests a driver offering their full attention to the road may have been able to at least attempt to brake or perform an evasive maneuver.
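Those figures are easy to sanity-check. Here is a quick back-of-the-envelope sketch in Python; the speed, glance time and reaction time are simply the numbers quoted above, not investigation findings.

```python
# Sanity-check the distances implied by the figures quoted above.
MPH_TO_FPS = 5280 / 3600   # one mile per hour expressed in feet per second

speed_mph = 37             # reported speed of the vehicle
glance_s = 5.0             # roughly how long the driver appears to look away
reaction_s = 2.3           # average driver perception-reaction time cited above

speed_fps = speed_mph * MPH_TO_FPS
print(f"Travelled during a {glance_s:.0f}-second glance: {speed_fps * glance_s:.0f} ft")
print(f"Travelled during a {reaction_s}-second reaction: {speed_fps * reaction_s:.0f} ft")
# Roughly 270 ft and 125 ft respectively -- the same order of magnitude as the
# "about 250 feet" Walker Smith describes, i.e. most of a city block.
```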
Image: A screenshot from Uber’s driver training video shows the course on which humans practice their role in automated vehicles before heading onto public streets (Credit: Uber)
Hands at 10 and 2
If you watch the Uber video regarding its self-driving pilot training, you can see the person keeps their hands close to the wheel during the trip, a detail the New York Times also included in its reporting about the story. That suggests an active role in the driving process, even though the car is meant to make all of the decisions.
But, the idea of a driver maintaining attention when not actively piloting the vehicle has been a sticking point since cars first started hinting at self-driving tech. Back in 2016, a Tesla got into a fatal collision when its autonomous systems couldn’t differentiate the white panels on the side of a truck from the brightness of the open sky. In that case, however, the driver reportedly didn’t notice—or chose to ignore—the vehicle’s calls for human intervention.
Researchers, including those at the Center for Automotive Research at Stanford, have been studying this moment of hand-off between autonomous systems and flesh-and-blood drivers for years, exploring options like haptic feedback in the steering wheel, as well as lights in and around the dashboard to indicate that something is wrong and intervention is required. Of course, all of that is irrelevant if the car doesn’t see trouble coming in the first place.
The navigation and self-driving tech in Uber’s vehicles is also a lot more advanced than Tesla’s semi-autonomous Autopilot mode, which isn’t meant to completely replace the need for a driver, and is still in beta according to the company. Uber uses LIDAR, a system that creates a 3D map of the areas surrounding the car using lasers, as well as a typical radar system, and cameras to detect objects before collisions.
On paper, these systems should have been able to detect a pedestrian in the road and send a signal to the driver to take over. While Uber hasn’t confirmed the exact way in which its system reacted just before and during the crash, it doesn’t appear that the driver received a warning—at least not with enough time to intervene.
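To make the hand-off idea concrete, here is a minimal, purely illustrative sketch. It is not Uber’s actual software, and the sensor names, thresholds and function names are all assumptions; it simply shows how redundant LIDAR, radar and camera range estimates could be combined with the average 2.3-second reaction time cited earlier to decide whether the safety driver needs a warning.

```python
# Toy sensor-fusion sketch (illustrative only, not Uber's actual system).
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Detection:
    sensor: str                # "lidar", "radar" or "camera"
    range_ft: Optional[float]  # distance to nearest object ahead; None if nothing seen

def time_to_collision_s(range_ft: float, speed_mph: float) -> float:
    """Seconds until impact if neither the car nor the object changes speed."""
    speed_fps = speed_mph * 5280 / 3600
    return range_ft / speed_fps

def should_alert_driver(detections: List[Detection], speed_mph: float,
                        reaction_s: float = 2.3) -> bool:
    """Warn the safety driver if ANY sensor reports an object closer than the
    distance the car will cover during an average human reaction time."""
    return any(
        d.range_ft is not None
        and time_to_collision_s(d.range_ft, speed_mph) <= reaction_s
        for d in detections
    )

# Example frame: lidar and radar both see an obstacle about 120 ft ahead at 37 mph.
frame = [Detection("lidar", 120.0), Detection("radar", 118.0), Detection("camera", None)]
print(should_alert_driver(frame, speed_mph=37))  # True: inside the ~125 ft reaction distance
```

A real system would track objects over time, estimate their velocity and fold in braking distance, but even this toy version shows why a warning has to arrive seconds, not fractions of a second, before a potential impact.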
Image: The video of the Uber incident from March 18 (not linked in this article, but readily available online) shows the driver looking down into the cabin in the moments before the crash (Credit: Uber)
Missed signals
The National Highway Traffic Safety Administration is currently investigating the Uber crash, and has laid out a set of recommended, but ultimately voluntary, guidelines for states testing autonomous vehicles. In the section about Human Machine Interface, it offers guidance for human drivers, most of which involves interpreting signals from the car itself.
It does, however, suggest that truly driverless cars still need monitoring from the “dispatcher or central authority,” which would track the status and condition of the car in real time. In essence, that central authority plays a similar role to a person sitting in the front seat. But a remote operator presumably wouldn’t have a camera pointed at them at the time of an incident, which adds another layer of complexity to the already-muddled topic of blame.
Even when these systems work correctly, handing control over to a human doesn’t always avoid accidents. In fact, Google found that all of the accidents it experienced in the early days of testing its Waymo self-driving cars came as a result of human intervention.
Image: GM’s 2019 autonomous Cruise AV rides won’t have a steering wheel or pedals (Credit: GM)
Driver not found
One fact that further complicates the current conversation about blame is that Arizona’s relaxed guidelines regarding self-driving cars don’t require a human pilot of any kind. Alphabet-owned Waymo is currently operating a self-driving taxi fleet in Arizona with no drivers at all. And GM recently confirmed that it would continue with its plans to test a fleet of its self-driving Cruise AV cars, which don’t have a steering wheel or pedals, precluding humans from giving any input.
Uber has temporarily suspended its autonomous taxi testing during the investigation of last week’s incident, but there’s no indication that the event will affect how these vehicles are governed.
California has allowed self-driving car testing since 2014, but until now, the state has required a driver behind the wheel. A representative from the California DMV provided the following guideline for companies testing on public roads: “A driver must be seated in the driver’s seat, monitoring the safe operation of the autonomous vehicle and capable of taking over immediate manual control of the autonomous vehicle in the event of an autonomous technology failure or emergency.”
On April 2, however, the California DMV will begin accepting applications for driverless testing and deployment permits, though according to a representative, it has not yet received any applications of that kind. The state does not have any intention of changing its outlook on these vehicles in light of the Uber incident.
Giving the people what they want
While truly driverless cars are clearly the endgame of this technology, the companies testing them apparently don’t think passengers are quite ready to step into an empty car and zoom off to their location.
One key responsibility of a Lyft pilot in an autonomous vehicle is to explain some of the technology and the process to passengers as they get in. The drivers aren’t permitted to talk during the ride, but their pre-departure spiel is partially designed to get riders over the mental hump of letting the machine take charge.
A 2016 Cox Automotive survey found that roughly 47 percent of respondents said they feared the computer system in an automated vehicle could fail. In the same survey, 27 percent said they wouldn’t be able to fully relax with the AV totally in control.
The fallout
While many experts consider this first human casualty of true self-driving tech to be inevitable, it’s unclear how things will play out when it comes to both rider sentiment and possible regulation. “The mood in some states (especially in the Northeast) and even in Congress could shift somewhat toward a larger supervisory role for governments,” says Walker Smith. “Some developers may support that, either because they need credibility or they want to protect their credibility from less reputable would-be developers.”
According to Tempe Police, there are currently no criminal charges on file against Uber or the driver who was in the car during the fatal crash. The family of the victim, however, has mentioned the possibility of civil action, though they haven’t specified exactly who would be the target of such a lawsuit.
This nebulous web of rules, regulations, and responsibilities makes sitting behind the wheel of a self-driving car more complicated than piloting the vehicle yourself—at least for now.
Five musical instruments and apps that teach you how to play
Image: Melody machines (Credit: Travis Rathbone)
Hearing music can make us dance, laugh, or cry; it has the power to excite us or give us goosebumps. Playing music, on the flipside, can make you smarter. Learning an instrument improves your brain’s executive function—the ability to manage resources and achieve goals. In doing so, being musical also strengthens your capacity to consider multiple concepts at once, a key facet of creative thinking. Instruments with baked-in teaching tools might be no replacement for an experienced human instructor, but they’re easy, at-home ways to help start the process. Bonus: They won’t get on your case about practicing your scales.
1. Strum the strings
Lights beneath the Fretlight FG-621 guitar’s translucent polymer fretboard show your fingers where to hold the strings to craft chords. Compatible smartphone apps, such as Guitar Tunes and MyJam, wirelessly send signals to those lights to guide you through fingerings, scales, and power chords. You can slow down or speed up the lessons to match your skill level. Once you’ve mastered the basics, turn off the lights and rock out on your legit electric ax until your fingers bleed from too much shredding.
2. Slap the skins
A beginning percussionist’s practice sessions can sound like an elephant charging through the local hardware store, so there’s good reason to be thankful for the volume-controllable rubber drums on Yamaha’s DTX 400 electronic setup. Each kit includes pads to represent the toms, snare, kick, and cymbals that you’d find in a standard set. Its built-in training mode plays examples of the most common patterns and rhythms in genres like rock and jazz. Groove along at increasing tempos until you’re pounding out rhythms like Neil Peart.
3. Tickle the keys
With 61 full-size keys, optional battery power, and 400 selectable instrument sounds, the Casio LK-260 is a familiar sight in music classrooms, but it can teach you to tap out tunes all on its own. The keys light up in sequence to show you the notes for everything from simple scales to complex compositions. The onboard teaching system guides you through a gradual learning process: First you listen, then watch, then jam along. You can speed up the beat as you progress and review your performances via a built-in digital recorder.
This article was originally published in the Spring 2018 Intelligence issue of Popular Science.