The recently revealed KeyRaider is yet another proof point that malicious actors are looking to tinker with iOS.
It’s a piece of malware that affects jailbroken iOS devices and was distributed through a Chinese repository accessible to Cydia users. Because of this, its exposure was relatively limited.
KeyRaider’s goal is to let anyone running specific instances of KeyRaider on a jailbroken device spoof in-app purchases without paying and download paid apps from the Apple App Store, though we haven’t been able to definitively confirm this functionality.
The tool effectively lets people steal from both Apple and mobile developers, and while that’s shady enough, it actually does this by pilfering and using other people’s legitimate Apple account information. It can be thought of as having two sides: some components provide the promised in-app purchase spoofing functionality while others may steal Apple account information, according to researchers from WeipTech working with Palo Alto Networks.
These are all up-and-coming device manufacturers (some more up-and-coming than others) that use alternate forms of Android (read: versions of Android that are not controlled by Google), and they are quickly shaking up the mobile market.
Droppers — no, they’re not just the tool you use to administer eye-drops or medicine. They’re also a tool used by malicious actors to quietly install apps, of which some may be malicious, onto your device.
When it comes to mobile, droppers are apps that either have or pretend to have the functionality of popular apps, such as games and utilities, but they also install additional applications onto a device that can be malicious or steal your data.
Droppers have long been a popular delivery mechanism for PC malware, and recent research on Hacking Team’s Android malware suggests they played a key role in distributing the company’s spyware as well. This is because droppers address one of the biggest challenges for malicious actors: how the malware actually gets onto the target device.
The “easy answer” is to publish the malware on the Google Play store, or the Apple App Store if the target is an iOS user. Most people download their applications from these marketplaces, thus placing malicious apps here is a potential goldmine for bad guys. Unfortunately for them, Google and Apple have actually done a great job of keeping their marketplaces fairly malware-free.
A next best option is often a dropper.
Lookout recently researched another piece of malware called NotFunny that was delivered through very similar means. NotFunny is adware, or an app that serves obtrusive ads that interfere with standard mobile operating experiences and/or collects excessive personal data that exceeds standard advertising principles. NotFunny has two sides: a dropper and its payload. The dropper masqueraded in Google Play as a Christmas ringtone app and other seemingly harmless utility or gaming apps. Once a victim opened the application, however, it would trigger a secondary app — the adware — to download to the device.
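The two-stage flow described above — a benign-looking facade that quietly pulls down a second, malicious app once opened — can be sketched as a toy simulation. All class and function names here are illustrative and not taken from any real sample:

```python
# Toy simulation of the two-stage "dropper" pattern: stage one looks
# like a harmless app, but opening it fetches and installs stage two.

class FakeRingtoneApp:
    """Stage one: presents itself as a harmless utility."""

    def __init__(self, payload_source):
        # payload_source stands in for a remote download of a second APK.
        self.payload_source = payload_source
        self.installed = ["ringtone-app"]

    def play_ringtone(self):
        # The advertised feature works, so the user suspects nothing.
        return "jingle bells"

    def on_open(self):
        self.play_ringtone()
        # ...while the real payload is silently fetched and "installed".
        self.installed.append(self.payload_source())
        return self.installed


def fetch_adware():
    # Stands in for downloading the adware payload from a remote server.
    return "adware-payload"


app = FakeRingtoneApp(fetch_adware)
print(app.on_open())  # ['ringtone-app', 'adware-payload']
```

The point of the sketch is the asymmetry: the user only ever interacts with the benign facade, while the install list quietly grows behind it.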
Lookout also saw this with the Gamex Android Trojan, which was installed by dropper apps like, again seemingly harmless, ad blockers and device performance improvement tools. We believe Mouabad.p, premium SMS fraud malware, was delivered through a dropper as well.
Droppers are effectively a bait and switch: you think you’re getting one thing, but wind up getting more than you bargained for. This highlights the importance of knowing what you’re downloading.
While the federal government might be under the impression that it doesn’t have a BYOD program, it is overlooking a key issue: Shadow BYOD.
Shadow BYOD is very similar to Shadow IT, in which employees use technologies — usually to enhance their productivity — that the IT department has not sanctioned or deployed. In Shadow BYOD’s case, it’s the issue of unmanaged personal devices connecting to the network and accessing government or corporate data.
The mobile ecosystem is moving toward economical smartphones. They are customizable and much more affordable than the $600-plus Android phones you might see on the market.
This poses a problem for enterprises which, to date, have relied on the app testing and vetting process applied to Google Experience devices, and the fact that app downloads on these devices are by default funneled through Google Play. Non-Google Experience devices introduce much more fragmentation.
With connected automobiles, the stakes for getting security right have never been higher. “What’s the worst that could happen?” is a lot more serious when you’re talking about a computer that can travel 100+ MPH.
When an industry without experience in Internet security starts connecting things to the Internet, it typically makes a number of mistakes both in how it implements secure systems, and how it interacts with the security community.
My colleague Marc Rogers and I set out to audit the security of the Tesla Model S because we wanted to shine a light on a car that we hypothesized would have a strong security architecture, given the Tesla team’s deep software experience. Out of this research, we hoped to start a conversation about simple and clear security best practices for the automotive industry.
That hypothesis turned out to be correct: the Tesla Model S has a very well designed security architecture that we believe should serve as a template for others in the industry. We also found a number of vulnerabilities that allowed us, with physical access to the vehicle, to gain root access to two of the infotainment systems: the instrument cluster (IC) above the steering wheel, and the 17-inch touchscreen center information display (CID) in the middle of the dash. This allowed us to perform a number of tasks, such as remotely opening and closing the trunk and frunk, locking and unlocking the doors, starting the car, and stopping the car.
However, this research focused on answering the question: how can we make cars more resilient to attack, assuming attackers can get into the infotainment systems? All of the exploitation we performed required physical access, and we did not demonstrate any remotely executable exploits. There is already sufficient research proving that cars can be exploited remotely. Further, we believe it is a relatively conservative assumption that any browser running WebKit will be exploitable by an attacker with sufficient skill or resources.
Connected cars are about to change the auto industry’s assembly line.
Vehicles are becoming computers on wheels and now have more in common with your laptop than they do the Model T. Just as smartphones have supplanted non-Internet-connected phones, connected cars will supplant non-Internet-connected cars. Auto manufacturers need to become software companies if they want to survive into the 21st century. To that end, the auto industry must now consider cybersecurity as an integral part to how cars are built, just as physical safety became a critical part of how cars were built in the late 20th century.
When an industry without experience from the front lines of Internet security begins connecting its products, one of two outcomes often occurs. If there are clear security best practices, then most companies will (hopefully) implement those best practices. If there are no clear best practices, companies will likely make a lot of security mistakes, resulting in major cybersecurity problems down the road. My research partner, Marc Rogers of CloudFlare, and I decided to help make sure those clear best practices were in place for the auto industry.
Last week, the world learned about critical vulnerabilities in Stagefright, an open source media playback engine used by 95 percent of Android devices, or roughly one billion devices worldwide. Beyond the sheer number of people likely at risk, this vulnerability is especially scary because it can be delivered via MMS (which is automatically downloaded to the device by default), so the code can remotely execute on your device without you actually doing anything. It would then have unfettered access to the camera, microphone, contacts, and photos – very personal stuff.
Now for the real kicker: you will need to wait for a pending security update from your carrier, device manufacturer, or Google to ultimately patch this vulnerability and be completely safe. To check whether a patch is available for most Android devices, go to Settings and tap System Updates.
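On newer Android builds you can also read the patch level programmatically: the `ro.build.version.security_patch` property (visible via `adb shell getprop`) reports a date in `YYYY-MM-DD` form, though older devices may not expose it at all. A minimal sketch that compares that date against August 2015 — the month Google’s first bulletin addressing Stagefright shipped, a cutoff you should treat as an assumption here:

```python
from datetime import date

# Assumption: the first security patch level covering the Stagefright
# fixes was Google's August 2015 bulletin. Adjust if your device's
# vendor shipped the fix on a different schedule.
FIRST_STAGEFRIGHT_FIX = date(2015, 8, 1)

def patch_covers_stagefright(patch_level: str) -> bool:
    """patch_level is the ro.build.version.security_patch string,
    e.g. "2015-09-01". Returns True if it is at or past the first
    bulletin that addressed Stagefright."""
    year, month, day = (int(part) for part in patch_level.split("-"))
    return date(year, month, day) >= FIRST_STAGEFRIGHT_FIX

print(patch_covers_stagefright("2015-07-01"))  # False
print(patch_covers_stagefright("2015-09-01"))  # True
```

This is only a date comparison, not a vulnerability test — which is why a detector app that probes the actual media code paths is more reliable.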
That’s why we’ve developed Stagefright Detector. This app arms you with information by telling you whether or not your Android device is vulnerable to Stagefright. If you are affected, we provide the run-down on how to mitigate your risk of being attacked. You’ll also be able to check back when you receive your security patch to confirm it contained the fix for Stagefright.
Yesterday a security researcher revealed a series of high-severity vulnerabilities related to Stagefright, a native Android media player, that affect nearly all Android devices in the world. The Stagefright vulnerabilities carry serious security implications: an attacker could exploit them to remotely control and steal data from a device by sending a victim a multimedia message (MMS) packaged with an exploit.
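Public analyses attributed the Stagefright bugs largely to integer overflows in size arithmetic while parsing media file structures. The failure mode can be sketched in a few lines of Python, with C-style 32-bit wraparound emulated by masking (the function names and sizes are illustrative, not taken from libstagefright):

```python
# Illustrative sketch of an integer-overflow bug class: a 32-bit sum
# wraps to a small value, so a later bounds check or allocation is
# undersized. Python ints don't overflow, so we mask to 32 bits.
U32 = 0xFFFFFFFF

def unsafe_total(chunk_size: int, extra: int) -> int:
    # Vulnerable pattern: the wrapped sum looks tiny, defeating any
    # check performed on the result.
    return (chunk_size + extra) & U32

def safe_total(chunk_size: int, extra: int) -> int:
    # Defensive pattern: reject inputs whose sum would exceed 32 bits
    # *before* doing the arithmetic.
    if chunk_size > U32 - extra:
        raise ValueError("size overflows 32 bits")
    return chunk_size + extra

print(hex(unsafe_total(0xFFFFFFF0, 0x20)))  # 0x10 -- wrapped
```

The defensive variant is the kind of pre-addition check that hardened media parsers apply to every attacker-controlled length field.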
This week, the security world exploded with the news that Hacking Team, an Italian vendor of spyware — software that captures Skype, message, location, social media, audio, visual, and more data, and is marketed as “stealth” and “untraceable” — was hacked.
One of the major takeaways is that a significant number of governments in the world, Hacking Team’s customers, are actively seeking to compromise iOS and Android devices, likely to access the trove of data stored on or accessed by these mobile devices.