Southwest Password Ad is both Good and Bad.

Southwest Airlines recently aired a TV ad in its “Wanna Get Away” series that features some serious password blunders. In the ad a general is asked for his password so that they “can lock down the system,” which he reluctantly provides. The password, “ihatemyjob1”, is rather embarrassing and hilarity ensues. Let’s watch…

https://www.ispot.tv/ad/AEjj/southwest-airlines-wanna-get-away-sale-sharing-your-password


Let us count the bad security practices used in this ad…

1. A single point of failure (the general).
2. He verbally shares his password for everyone to hear instead of typing it in himself.
3. The password is displayed without a mask.
4. The password is displayed in 100-point type on a 20-foot screen for everyone in the room to see.
5. The password does not use uppercase or special characters.
6. While the password uses a number, it is merely appended to the end.
7. No two-factor authentication.
8. Everyone who sees this ad may think that while ‘ihatemyjob1’ is an embarrassing password, it is perfectly acceptable since a general uses it.

Let us count the good security practices in this ad

1. The password is longer than eight characters.
2. The password uses a number.
3. Everyone who watches this ad hopefully realizes that they use a similar password and quickly changes it to something better.
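
Taken together, these two lists amount to a rough checklist that is easy to automate. Here is a minimal, illustrative Python sketch of those checks; the function name and the exact rules are my own assumptions, not any real password policy:

```python
import re

def password_problems(password: str) -> list[str]:
    """Score a password against the criteria listed above (illustrative only)."""
    problems = []
    if len(password) <= 8:
        problems.append("eight characters or fewer")
    if not re.search(r"[A-Z]", password):
        problems.append("no uppercase letters")
    if not re.search(r"[^A-Za-z0-9]", password):
        problems.append("no special characters")
    if not re.search(r"\d", password):
        problems.append("no numbers")
    elif re.fullmatch(r"\D+\d+", password):
        problems.append("number merely appended to the end")
    return problems

print(password_problems("ihatemyjob1"))
# ['no uppercase letters', 'no special characters', 'number merely appended to the end']
```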

Let’s face it: while slightly funny, this ad will make no one stop and think about how secure their own password may or may not be. However, it might make some people think that ‘ihatemyjob1’ or something similar is perfectly OK to use.

Addendum: The general’s uniform in this ad is a disgrace. It was probably done on purpose so as not to offend any one service, but they have in fact offended all of them.

Tilting It Sideways

Trying to track down the origins of an Internet meme can be an almost fruitless endeavor. Other than giving credit to its originator, and perhaps giving them a few minutes of Internet fame, there really isn’t a lot at stake in determining who the kid in success.gif was or which meme Laina Morris is responsible for. Finding the origin of a story involving the breach of critical infrastructure, however, can be rather important.

Like funny Internet memes, stories about compromises of water plants, steel factories, power companies, or other systems controlled by SCADA or ICS can be repeated over and over until they are accepted as fact, with no one questioning their authenticity. Previous events such as power outages in Brazil, a water pump failure in Illinois, the improper shutdown of a blast furnace at a German steel mill, and a pipeline explosion in Turkey were all originally attributed to cyber attacks. In fact, cyber attacks were blamed in almost all of these cases not because there was any actual evidence but because of the lack of any other explanation. Since nothing else could have caused the problem, it must have been those meddling hackers.

I recently heard of a new incident that seems to fall into this same scenario. The story claims that hackers broke into the control system of a floating oil rig off the coast of Africa, somehow messed with the ballast controls, and caused the rig to tilt. The rig had to be taken offline while the systems were cleaned up. As with most stories of this type, no supporting information is given: no actual dates, no name of the oil rig or its owner; even the location is vague, ‘off the coast of Africa’, an entire continent.


Transcription of L0pht Testimony

Transcription of the YouTube Video:
Hackers Testifying at the United States Senate, May 19, 1998 (L0pht Heavy Industries)
https://www.youtube.com/watch?v=VVJldn_MmMY

Transcribed by:
https://www.fiverr.com/alx_does

Senator Thompson: …If you gentlemen would come forward… We’re joined today by the seven members of the L0pht, a hacker think tank in Cambridge, Massachusetts. Due to the sensitivity of the work done at the L0pht, they’ll be using their hacker names of Mudge, Weld, Brian Oblivion, Kingpin, Space Rogue, Tan, and Stephen Von Neumann. Gentlemen…

Off Camera: I thought you were the Kingpin?
(Laughter)

Senator Thompson: I ah, I hope my grandkids don’t ask me who my witnesses were today, and say… Space Rogue…

But we do, we do understand your — and do appreciate your being with us. Do you, ah, may I ask your name?

Mudge: I’m Mudge.

Senator: Mudge, would you like to make a statement?

Mudge: Yes I would. Emmm! Thank you very much for having us here. We think this is hopefully a very great step forward and are thrilled that the government in general is starting to approach the hacker community; we think it’s a tremendous asset that the hackers bring to the table here, an understanding! Emm! My handle is Mudge, and I and the six individuals seated before you, who I’ll run down the line: Brian Oblivion, this is John Tan, Kingpin, Weld Pond, Space Rogue, and Stephen Von Neumann… we make up the hacker group known as the L0pht. And for the last four years, the seven of us have been touted as just about everything, from a hacker conglomerate, a hacker think tank, and the hangout place for the top US hackers, to network security experts and a consumer watch group. In reality, all we really are is just curious. For well over the past decade, the seven of us have independently learned and worked in the fields of satellite communications, cryptography, operating system design and implementation, computer network security, electronics, and telecommunications.
Through our learning process, we’ve made a few waves with some large companies such as Microsoft, IBM, Novell, and Sun Microsystems. At the same time, the top hackers, the top legitimate cryptographers, and computer security professionals pay us visits when they are in town, just to see what we’re currently working on… so we kind of figured we must be doing something right.
I’d like to take the opportunity to let the various members talk about a few of their current projects and what they are going to be working on in the future. Emm! Weld?

I watched CSI:Cyber so you don’t have to.

CSI has a proven formula for making popular TV shows. Unfortunately, that formula does not include making accurate TV shows. When it comes to tech and things ‘cyber’, this is probably the preeminent example of CSI being bad and wrong at the same time. I thought there was no way they could top this; I was wrong.

Hollywood has had a long history of doing tech wrong. Take a look at the recent Scorpion TV show. On second thought, don’t; it’s almost as bad. Occasionally Hollywood does get tech correct, as with the recent Blackhat movie, but while the tech was good the movie itself was bad for other reasons. The last time, perhaps the only time, Hollywood got both the movie and the tech right was Sneakers, which is coming up on a quarter century in age.

While I think it is great that TV shows like this bring technical issues to a mass audience, scaring people into thinking that the Internet is out to get them is probably not in anyone’s best interest. Humans often do stupid things when they are scared.

Let me talk first about the few things that CSI:Cyber got right. The show mentions that social media is a huge aid to law enforcement, and one of the characters jokingly says that’s why he doesn’t use it. This is absolutely correct; Facebook, Twitter, and other sites are often the first step in an investigation of any sort, often checked even before witnesses or suspects are interviewed.

The softball-shaped camera that is thrown through an open window into the bad guys’ lair near the end is a real device that law enforcement actually uses. They got this right.

In another scene one of the technical characters, who is labeled ‘the greatest hacker in the world’ (I’m not even going to touch that statement), claims that RATs, or Remote Access Trojans, are easy to get for $40 on the ‘surface net’. He is right about the easy-to-get part, although his price is a little high and I have no idea what the ‘surface net’ is. But yes, tools that online criminals use, like RATs, are very easy to come by. The thing about Remote Access Trojans is that they are very similar to legitimate remote access tools like GoToMyPC or Remote Desktop.

Probably the most important thing they got right in this show was when the World’s Greatest Hacker was berating the lowly tech employee for allowing a vulnerability to exist in the company’s software, and the tech guy responds with “I took it upstairs but they didn’t listen.” This is an all too common theme in the information security world. Company executives often refuse to listen to security concerns and instead focus more on the bottom line. This is probably the single truest thing this show got right.

The second most important thing they got right was the weak security present in many Internet-connected cameras. Many such cameras ship with default passwords and are easily found by searching the Internet, allowing anyone to connect to a camera and watch and listen to what is happening. There have been cases where people connected to a camera and then yelled at the sleeping baby. Manufacturers of these cameras were told about their default password problems, but most refused to fix them, that is, until the stories started to hit the press and the FTC started to levy fines. Even after a company issues an update to a device’s firmware, it is up to the owner of each camera to learn about the update and apply the patch themselves. This seldom happens, leaving tens of thousands of devices installed in people’s homes that anyone can access.
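
If you own one of these cameras, it is easy to check whether it still answers to a factory default. Here is a minimal sketch, assuming a camera with an HTTP interface protected by basic auth; the address and the credential list are illustrative assumptions, not any particular vendor’s defaults, and you should only point something like this at devices you own:

```python
import urllib.error
import urllib.request

# Illustrative defaults only; real lists come from vendor manuals.
DEFAULT_CREDENTIALS = [("admin", "admin"), ("admin", "1234"), ("root", "root")]
CAMERA_URL = "http://192.168.1.64/"  # hypothetical address of your own camera

def still_has_default(url: str) -> bool:
    for user, password in DEFAULT_CREDENTIALS:
        mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
        mgr.add_password(None, url, user, password)
        opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(mgr))
        try:
            opener.open(url, timeout=5)  # a 200 response means the login worked
            print(f"Still accepting default credential {user}:{password}")
            return True
        except urllib.error.URLError:
            continue  # 401 or unreachable: this credential was rejected
    return False

if __name__ == "__main__":
    still_has_default(CAMERA_URL)
```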

Other than that, just about everything else in the show is completely, unbelievably wrong. Not only are things wrong, but they play on well-known false tropes: that lead can block radio signals (it can’t), that convicted criminals are allowed to work in the field on active investigations, that you can quickly separate overlaid audio and translate it, that you need big wall-sized monitors in order to catch bad guys, that hackers who could be halfway across the world are conveniently just an hour or less away, that non-smart phones can have GPS apps, and that cops treat forensic data so carelessly.

One of the most egregious examples was the speed at which the characters analyzed the camera’s source code, which came up all green and then turned red. Source code doesn’t just magically turn red when malware is found. Reverse engineering is painstakingly hard, and it takes a lot of time. If code could just magically turn red when it did bad things, like it does in this show, the world would be a much, much better place.

I was especially troubled by one of the statements made early in the show: “Any crime involving electronic devices is by definition, cyber.” While this is just a TV show, there are people who believe this or at least will be influenced by it. That scares me, as I guess it makes my electric drill cyber.

Also, I loved how the characters on the show could do crystal-clear videoconferences from remote locations. How? They never bothered to explain where the camera was or what they were using for bandwidth. If they did it with their cell phones, I want to get on that data plan.

And I could not overlook that they had the one black character on the show repeat a racist nursery rhyme, “Eenie meenie miney moe, catch a…” Well, they changed the word on the show, but I’m really surprised they let that through.

If you didn’t watch this show you didn’t miss anything at all, and I encourage you not to watch it. In fact, just forget that it exists, and with any luck it will be canceled. And then we just have to wait for the next TV show to do tech wrong.

In the Beginning There was Full Disclosure

Two of the largest companies in the world are bickering with each other about how best to protect users. I won’t get into just how historically hypocritical this is for both Microsoft and Google, or how childish it makes them both look, but it brings up a debate that has been raging in security circles for over a hundred years, starting way back in the 1890s with the release of locksmithing information. An organization I was involved with, L0pht Heavy Industries, raised the debate again in the 1990s as security researchers started finding vulnerabilities in products.

In the beginning there was full disclosure, and there was only full disclosure, and we liked it. In the beginning the goal was to get stuff fixed; it wasn’t about glory, it wasn’t about bug bounties, it wasn’t about embarrassing your competition. No, in the beginning it was about getting bugs fixed, about making the software you used, the software you deployed to your users, safe. However, in the beginning vendors didn’t see it that way; many of them still don’t. Vendors would ignore you or purposely delay you. There is no money in fixing bugs that no one else is complaining about, so most vendors wouldn’t fix them, at least not until a bug became public and all of their customers started to complain about it. That was the power of full disclosure.

Vendors of course hated full disclosure because they had no control over the process; in fact, there was no process at all. And so they complained, vociferously. Vendors talked about ethics and morality and how full disclosure helped the bad guys. So a guy named Rain Forest Puppy published the first Full Disclosure Policy, promising to release vulnerabilities to vendors privately first, but only so long as the vendors promised to fix things in a timely manner. If the vendor didn’t get stuff fixed, the researcher could still pull out their most effective tool, full disclosure, to get the job done.

But vendors didn’t like this one bit, so Microsoft developed a policy of its own and called it Coordinated Disclosure. Coordinated Disclosure calls on researchers to work with the vendor until a fix can be released, regardless of how long that takes. Under Coordinated Disclosure there is no option for full disclosure at all. Of course, Coordinated Disclosure assumes that the vendor is even interested in fixing the bug in the first place.

The problem that many companies with vulnerability disclosure policies, such as Microsoft, don’t realize is that they are not the ones in control. Vendor disclosure policies are not binding on the researcher; it is the researcher’s choice whether or not to follow a company’s disclosure policy. Vendor policies work great for the vendor, giving them all the time in the world to fix a bug, but for researchers who want to get stuff fixed such policies can be a major pain to work within.

Disclosing vulnerabilities isn’t an easy thing. In the mid-nineties at L0pht Heavy Industries we quickly found that vendors had absolutely no interest in fixing bugs at all and would have preferred that we just kept our mouths shut. A lifetime later, it was part of my job to help coordinate the disclosure of vulnerabilities our pentesters found with the various vendors. If you’re a lone researcher with only one vulnerability it’s not such a big deal: you send a few emails, wait a little while, and if the vendor is cooperative a fix is pushed out in a few days’ or months’ time. If you happen to have several dozen vulnerabilities that you are attempting to get fixed, all at the same time and all with different vendors, the process is considerably more involved. In fact, simply coordinating these disclosures can be a full-time job for multiple people within an organization. There is no ROI here either; the ‘simple’ process of attempting to disclose vulnerabilities eats up revenue in the time your employees spend trying to coordinate disclosures and get stuff fixed.

In 2009 several researchers found the disclosure process so onerous that they started the “No More Free Bugs” campaign and promised not to release any more vulnerabilities for free. In response, vendors started bug bounty programs, where they rightly paid researchers for their hard work. However, even that process comes at a cost for both the vendor and the researcher, so much so that there are now third-party companies that help vendors run bug bounty programs and help researchers disclose vulnerabilities.

Of course, there are still vendors who refuse to fix stuff or who wait forever to do so. According to Tipping Point’s Zero Day Initiative there are currently 212 known security vulnerabilities without fixes, several of which are over a year old. It is likely that the only way any of these ancient bugs will get fixed is by pulling out the old standby of full disclosure. In fact, Tipping Point has threatened to do just that, giving vendors just six months to get stuff fixed before it publishes limited details on the bugs.

This has all led us to the point where Google has a disclosure policy that basically says it is going full disclosure after 90 days whether the bug is fixed or not, and where Microsoft is asking for just a few more days so it can include the fix in its regular Patch Tuesday. Two big kids who should be setting the example are instead acting like a couple of teenagers on the playground. How does any of this get stuff fixed and protect users?

This is why you see many companies and individual researchers not disclosing anything at all, and this should not happen. And I haven’t even gotten into the issue of vendors filing lawsuits against researchers as a means to keep them quiet.

The entire process has gotten out of hand. The number one goal here should be getting stuff fixed because getting stuff fixed helps protect the user, it helps defeat the bad guys and it helps make the world a better place.

Microsoft says that full disclosure “forces customers to defend themselves,” which is the wrong way to look at it. Full disclosure allows companies to defend themselves if they so choose. The opposite is non-disclosure, which helps no one. Just because a bug hasn’t been disclosed doesn’t mean it is not there. It doesn’t magically pop into existence only when someone publishes something about it. The bug is there, waiting to be found. Maybe the bad guys already found it. Maybe they are already using it against you. And yet you are blissfully unaware that the bug even exists. Full disclosure gives you knowledge that you can use to protect yourself even if a patch is not available. You can choose to turn off the affected device or add additional protections to your environment to mitigate the risk. Once disclosure happens, the choice is yours: you can evaluate the risk this particular bug presents to your environment and make an educated decision about what steps to take, depending on your own risk tolerance. While most users will continue on blissfully unaware, or will choose to ignore the information, that too is their choice, not Microsoft’s and not Google’s.

Google’s goal of getting everything it finds fixed within three months is laudable but unrealistic. Some bugs just take a little longer to verify, develop patches for, and test. It is not unreasonable to be a little flexible if you feel the vendor is working in good faith to develop a patch. To arbitrarily go full disclosure when you know the vendor has a patch just days away is immoral; it puts users at risk and makes you look like a stubborn child.

In this particular case both the vendor and the researcher are wrong. Microsoft obviously communicated the status of the fix to Google and told Google when to expect the patch. It is not unreasonable for Microsoft to ask for a few extra days, and it should not be unreasonable for Google to grant such a request. On the other hand, I am sure Google informed Microsoft that it would wait only 90 days before going full disclosure; Microsoft was informed of the risk and should have pushed harder to meet the deadline.

And so the disclosure debate continues, unabated, after more than a hundred years. With two of the giants in our industry acting like spoiled children, we as security professionals must take the reins from our supposed leaders and set a better example. It needs to be about protecting the user. It should not be about grandstanding or whining or even making a buck; in the end it should be about getting stuff fixed.

UPDATE 2015.02.13
Google has made an update to its 90-day disclosure deadline. It has decided to make allowances for deadlines that fall on weekends and holidays and, more importantly, has granted a grace period for vendors who communicate their intent to release a patch within 14 days of the 90-day deadline. It is nice to see vendors and researchers working together. The goal here should be to protect users, not to embarrass vendors. This grace period shows an understanding of the issues surrounding disclosure that impact vendors while continuing to hold them to a high standard.
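
To make the mechanics concrete, here is a minimal sketch of the updated deadline arithmetic as described above; the function name is my own, and real holidays would need a calendar, so this sketch only handles weekends:

```python
from datetime import date, timedelta

def disclosure_date(reported: date, vendor_patch: date | None = None) -> date:
    """90-day deadline with the grace period and weekend allowance described above."""
    deadline = reported + timedelta(days=90)
    # Grace period: if the vendor communicates that a patch will ship within
    # 14 days of the deadline, hold disclosure for the patch.
    if vendor_patch and deadline < vendor_patch <= deadline + timedelta(days=14):
        deadline = vendor_patch
    # Weekend allowance: roll a Saturday or Sunday deadline to Monday.
    while deadline.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        deadline += timedelta(days=1)
    return deadline

# A hypothetical bug reported Nov 15, 2014, with a patch promised for Feb 20, 2015:
print(disclosure_date(date(2014, 11, 15), date(2015, 2, 20)))  # 2015-02-20
```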

Interested in reading more?

Microsoft’s latest plea for VCD is as much propaganda as sincere – OSVDB

Microsoft blasts Google for vulnerability disclosure policy – CSO Online

A Call for Better Vulnerability Response – ErrataSec

Four Unnamed Sources

Or: If a pipeline explodes in the desert and there is no one there to hear it, was it really a cyberwar attack?

No one questions the importance of keeping abreast of current trends and developments in information security, whether that means new malware techniques, attack vectors, or just the motivations of attackers. It means looking into the details of the Target and Sony breaches, checking out the specifics of Heartbleed and POODLE, and keeping up with the latest patches from Microsoft and other vendors. It also means trying to separate the facts from the fear, uncertainty, and doubt used to generate page views.

One recent story has me questioning whether a pipeline explosion in Turkey was actually an early example of cyberwar. The article claims that a large explosion along the Baku-Tbilisi-Ceyhan (BTC) pipeline, near the eastern Turkish city of Erzincan on Aug. 7, 2008, was in fact a cyber attack. The article attempts to downplay the claims of the Turkish government, which said the explosion was caused by a malfunction, as well as discounting the claims of the Kurdistan Workers’ Party, which claimed credit for the explosion and has a long history of blowing up pipelines. There was also a statement from Botas International Ltd., the company that operates the pipeline, saying that the pipeline’s computer systems had not been tampered with.

The explosion occurred two years before Stuxnet, and while I doubt Stuxnet was the first operation of its kind, the evidence to support a similar type of attack on this pipeline is circumstantial at best. Even if this was a cyber attack, it would not “rewrite the history of cyberwar,” as one expert quoted in the article claimed; it would just add one more data point to an already interesting history. Unfortunately, the article does not give any proof that this was in fact a cyber attack.

Certainly the article lists plenty of circumstantial evidence to support the theory of a cyber attack blowing up the pipeline, but the actual proof comes down to “four people familiar with the incident who asked not to be identified.” Obviously, in some cases journalists must rely on unidentified sources; however, when they do, the information provided is usually corroborated by other authoritative, named sources. That is not the case here. All of the named quotes in the article speak in general terms, adding background if you will, and do not speak directly to this event.

Pipelines and cyber attacks have a long history of their own, going back at least as far as 1982, when the CIA convinced a Canadian company to deliberately put flaws into pipeline control software that was then sold to the Soviet Union. This reportedly led to a massive explosion along a Soviet pipeline in June of that year. That story also has its detractors: some say the explosion was caused by poor construction, others that it was flawed turbines and not flawed software that caused the Siberian explosion.

There was also a confidential report released by DHS in early 2013 claiming that key personnel at 23 different gas pipeline companies had been targeted by Chinese hackers with spear phishing attacks. And let’s not forget the plot of the movie Die Hard 4, where the evil hacker bad guy redirects all the natural gas in the pipelines to converge on a power station, causing a massive Die Hard-esque explosion.

One really has to ask why anyone would go to such great lengths to disrupt a pipeline when a simple misplaced cigarette butt can cause a massive explosion, like the one in Kenya in 2011 that killed over 100 people. Stuxnet is thought to have required numerous teams of coders working for several months to create the software to disable the centrifuges at Natanz, a task that arguably could be accomplished in no other way. There are far more efficient ways to blow up a pipeline than to expend months of effort and untold dollars to accomplish what a small team and some explosives could do just as well, if not better.

So was the explosion along the Baku-Tbilisi-Ceyhan (BTC) pipeline an early act of cyberwar, potentially setting back the clock on the earliest known cyber operation of this size? Sure, it’s possible, but without additional facts from someone other than an “unnamed source familiar with the incident who asked not to be identified” I will have my doubts. Until those facts are presented I’ll go back to reading my Microsoft Patch Tuesday reports.

UPDATE 2015.02.16
I was just sent this link:
https://cablegatesearch.wikileaks.org/cable.php?id=08BAKU790
It indicates that physical security of the pipeline would be difficult if not impossible, and it further supports the idea that the PKK was the primary suspect for an explosion via conventional means. The cable makes no mention of a cyber attack of any kind.

UPDATE 2015.06.19
An internal report now states that “A cyber attack would not have been possible in the described way.” The report goes on to say that the valve stations that were allegedly tampered with were not connected to a network that could be remotely accessed anyway. Here is the original Turkish article.

Additional Reading
Looks like I wasn’t the only one with a problem with this article.
Cyberwar revisionism: 2008 BTC pipeline explosion

All of this has happened before and all of this will happen again

Two teenagers in Winnipeg, Canada somehow got the idea to see if the default password on a Bank of Montreal ATM was still valid. They got the default password after finding the operator’s manual for the ATM online. As is often the case, the default had not been changed and was still valid. Instead of taking all the money they could carry and running away, the kids went to the bank to let them know. Of course, being fourteen-year-old kids, they went to their local branch, where, being fourteen-year-old kids, no one believed them. The kids had to go back to the ATM and get it to print out stats, like how much money was still in the machine, before the branch manager believed them enough to notify the bank’s security department.

There are a lot of things that can be learned from this story, or actually should already have been known. If these kids had tried this in the United States, despite their good intentions, they might have been charged with a violation of the CFAA (Computer Fraud and Abuse Act). If the bank manager had not been so understanding, I am sure they could have been charged under the Canadian equivalent. Testing for default passwords on bank-owned ATMs is probably not the smartest way to spend your free time.

The branch manager should have taken the allegation seriously the first time, regardless of how old the people with the information were. Instead the branch manager evidently told the kids that what they initially reported was impossible. This shows a serious lack of security awareness training for Bank of Montreal employees.

What about the bank itself? Why did the Bank of Montreal leave a default six-digit password on an ATM? It is unlikely that only one machine out of several hundred ATMs was configured with the default password. I hope BMO gets around to changing all those defaults before someone is able to make off with the cash.

The worst part about this story, I think, is that all of this has happened before. A lot of people have heard about the presentation at the Black Hat conference in 2010 by the late, great Barnaby Jack, where he made an ATM spit out money on stage. That was rather sensational and required access to the back of the machine. But what about the arrest of two people in Lincoln, Nebraska in 2008 after they used default passcodes to steal money from an ATM? Or the thefts in Derry, PA in 2007 from a Triton 9100 ATM after its default passcodes were found online? Or again in Virginia Beach, VA in 2006, this time using default passcodes for the Tranax 1500, also found online in the operator’s manuals.

So in this one story we have default passcodes that weren’t changed, people who do not take security alerts seriously, people not learning from history, and the possibility of innocent kids running afoul of the law. Of course, all of this has happened before, and unfortunately all of this will happen again.

Everybody must get stoned

Apparently FBI Director James Comey thinks that everyone in the Information Security Industry is a dope-smoking pothead who gets high just before an interview. “I have to hire a great work force to compete with those cyber criminals, and some of those kids want to smoke weed on the way to the interview,” Comey was quoted as saying.

Of course, two days later, after basically insulting most of the Information Security Industry by calling them all stoners, Director Comey said his comments shouldn’t be taken seriously and that he was only trying to inject some humor.

Currently the FBI says that anyone who has used marijuana in the last three years is “not suitable for employment.” In addition, you cannot have used other illegal drugs in the previous ten years. So the FBI has already recognized that marijuana is different from other ‘hard’ drugs, and now it may be thinking about relaxing those standards even further. Considering that there are twenty-one states where marijuana for medical use is perfectly OK, and two states, Colorado and Washington, where marijuana is legal for recreational use, it makes sense for the agency to revisit its anti-drug policy. However, specifically singling out one group, such as information security professionals, may not be the best way to attract applicants.

If the FBI wants to review its marijuana policy for all potential applicants, regardless of job function, in light of the recent relaxation of laws in some states, well, that’s great. The overall sentiment toward soft drugs like marijuana is changing, and employers, including the FBI, should adjust to that sentiment at the same rate as society. However, relaxing standards for just one specific job type sends the wrong message.

The FBI has open head count for over two thousand recruits this year; most of those will be assigned to cyber crime units. The FBI, like every other employer in the security industry, is having a difficult time attracting qualified applicants for those positions. The US Army has said in the past that it wants to relax physical fitness standards for cyber warriors. Relaxing standards for those applicants, as I have argued before, is not the best way to get qualified candidates, and it sends the wrong message to applicants and to current employees who met the old standards.

This is a simple economics question of supply and demand. When demand is high and supply is low, the price, or in this case the salary, must go up. Artificially increasing the supply by lowering standards helps no one. If the FBI wants to lower standards to increase the pool of applicants, how about taking a look at some of the other things that automatically disqualify candidates for employment with the FBI? If you failed to register for the Selective Service, guess what? No FBI job for you; same with defaulting on a government-insured student loan. I have to think that the number of qualified candidates who have defaulted on a student loan or did not register with the Selective Service is probably several times greater than the number who light up a joint just before an interview. If the FBI is serious about increasing its applicant pool, perhaps it should reexamine those restrictions as well.

The FBI and other government agencies have a lot of strikes against them when attempting to attract highly qualified applicants. Things like a strict dress code, initial assignments to small offices, and government bureaucracy don’t help at all. However, the FBI does have things other employers can’t offer, like an amazing benefits package, stable employment that isn’t subject to market forces, and of course the fact that it is the government. There is a distinct subset of people who view employment in government and law enforcement as an attractive option. Perhaps the FBI and other agencies should play up these strengths when recruiting, as opposed to reducing standards.

But seriously, are people really getting high before interviews, especially at the FBI, as Director Comey humorously suggests? If someone showed up drunk to an interview I wouldn’t hire them either, let alone stoned out of their mind. I am sure there is some drug use in the Information Security Industry, just as there is in the rest of the population, but to suggest that infosec people are a bunch of reefer-toking stoners who get high so much they can’t sober up for an interview tells me the FBI isn’t very familiar with the industry it is trying to recruit from.

Is it time for an industry-wide MAPP program?

As you might suspect, the bad guys have much better exploit notification than the good guys. While there is no central repository of vulnerability information that is released only to the good guys, Microsoft does an excellent job with early notification of its vulnerability information via MAPP (the Microsoft Active Protections Program). Should something similar be set up for all security bugs on an industry-wide basis?

On the surface it sounds like a great idea. Information about critical bugs like Heartbleed could be shared with trusted, vetted members before it was made publicly available and the bad guys could take advantage of it. This would give those trusted members time to fix the problem before the bad guys could develop new attacks and exploit the flaws.

This is how MAPP works: Microsoft has very strict guidelines on who can and cannot be included in the program, and if you are found to be leaking information before the specified release date you are ejected from the program. Microsoft historically granted its trusted MAPP partners only a few days’ notice of the upcoming Patch Tuesday bugs but has recently expanded that window to give vendors more time to develop protections for their products before the bad guys can reverse engineer the patches and develop exploits for those bugs.

This all works for Microsoft because it is in control of its information; the number of members in MAPP is kept small, and each must conform to strict guidelines to protect the information Microsoft provides. But on an industry-wide scale this model falls apart. A prime example of the chaos that can surround a critical bug disclosure is the mess surrounding the disclosure of the Heartbleed bug. If you look at the timeline composed by the Sydney Morning Herald, it is evident that keeping the disclosure process simple and organized on an industry-wide level is anything but simple. The process was fraught with non-disclosure agreements, employee leaks, and covert secrecy, definitely not a process that should be trusted with critical software vulnerabilities.

The first issue with an industry-wide MAPP-style program would be who would run it. Is this a task for the US government? What about bugs found outside the United States? How would you keep the NSA or other agencies from attempting to hoard a critical flaw and add it to their weapons stockpile? An independent international third party could run such a program, but how would it be funded? You could charge a fee to trusted members, but then you introduce the possibility of someone buying their way in even though they shouldn’t be trusted, not to mention the ethical debate that would arise from ‘selling’ vulnerability information.

Then there is the matter of deciding who can be trusted to handle such information early. As with any secret, the more people you tell, the harder it is to keep, and as the Heartbleed timeline shows, some people may leak information to friends, employers, or bad guys before a public announcement. Membership would have to be limited to prevent the circle from getting too large, but who decides who is in and who isn’t?

Of course, all this completely ignores the actions of the rogue researcher, who is free to do whatever they want with their research. There is nothing stopping them from publishing the information publicly, telling a small group of people, selling it to the highest bidder, or hoarding it for their own use and telling no one.

An industry-wide MAPP program sounds good at first, but due to governance issues, international politics, and of course money, it would be difficult to keep together and to keep the information out of the hands of the bad guys, and it would probably just create way too much drama and infighting inside the industry. Even if you were able to solve all those problems, there would still be the one person who decides they don’t want to play by the rules and will do what they want.

Another BIG hack that wasn’t

No time to do a full analysis, but the basics: a story out of Israel claims a tunnel was hit by a sophisticated cyber attack that caused a… traffic jam. The story went out on the Associated Press newswire on a Sunday afternoon, so by Monday morning it was pretty much everywhere you looked.

The “attack” was supposedly a “classified matter” involving “a Trojan horse attack” that targeted the security camera system on the Carmel Tunnels toll road on Sept. 8. The attack caused an immediate 20-minute lockdown of the roadway and then an eight-hour shutdown the next day, causing a pretty big traffic jam. Supposedly the attack was the work of “unknown, sophisticated hackers” who were compared to Anonymous but were not sophisticated enough to be nation-state-funded attackers from Iran.

Even just reading this, it sounds like a run-of-the-mill malware infestation and not some targeted, sophisticated, state-sponsored cyber attack. I mean, why would anyone specifically target a tunnel? There is no money there, no intellectual property to be stolen, so unless your goal is to create an isolated traffic jam, what’s the point? But there is more. The tunnel operator, Carmelton, issued a statement saying: nope, no cyber attack here. It blamed the traffic jam on “an internal component malfunction” and went on to say “this was not a hacker attack.”

@snd_wagenseil @4Dgifts @WeldPond more than one source confirmed.

— Daniel Estrin (@DanielEstrin) October 28, 2013

According to @DanielEstrin, whose name is on the byline of the story, more than one source confirmed this Trojan horse attack story, and yet he did not bother to confirm it with the people most likely to know: the actual operators of the tunnel.

So we can either believe the unnamed “cybersecurity experts” who warned of a sophisticated “Trojan horse attack” that was compared to Anonymous and conducted for no monetary gain or intellectual property theft, or we can believe the operators of the actual tunnel system itself. Who has more to gain here?

Late Update:
Looks like I am not the only one to think this might not have been a cyber attack.
“Cyberattack Against Israeli Highway System? Maybe Not”