HealthCare.gov: What Went Wrong?

New submitter codeusirae writes "An initial round of criticism focused on how many files the browser was being forced to download just to access the site, per an article at Reuters. A thread at Reddit appeared and was filled with analyses of the code. But closer looks by others have teased out deeper, more systematic issues."
This discussion has been archived. No new comments can be posted.

  • by CheezburgerBrown . ( 3417019 ) on Saturday November 02, 2013 @04:30PM (#45313395)
    This article is dated oct 8. I had assumed it would be more recent.
    • by Anonymous Coward on Saturday November 02, 2013 @04:43PM (#45313489)
      You do know you're on Slashdot, right? An article from 10/8 is practically superluminal for these guys.
    • Re: (Score:3, Insightful)

      This article is dated oct 8. I had assumed it would be more recent.

      Obligatory: You must be new here...

      In other news, it's still a relevant and current event; just because something's a month old doesn't mean it might as well have been written on stone tablets. I know the iThingie generation has the attention span of... oh who am I kidding, they didn't even finish reading the summary let alone the comments. :) But more seriously, it's pretty clear at this point the problem isn't the technology, but rather that the implementation was divided up into two teams with

      • by Runaway1956 ( 1322357 ) on Saturday November 02, 2013 @06:04PM (#45314047) Homepage Journal

        Girl - you write some pretty smart, insightful comments from time to time. But your logic is missing a few cogs here. Occam's razor []

      • by garyebickford ( 222422 ) <> on Saturday November 02, 2013 @06:17PM (#45314121)

        I recall a study from several years ago (10 years? possibly) that showed the probability of failure increased with the size (budget) of the project. Above about $5 million in then-dollars the probability was near 100%. As I recall failure was defined as either technical failure, or budget overruns going so high the project was cancelled. Of course, I have no citation. That would be too easy. :)

        However, I did search for "Probability of Software Project Failure", and got some fascinating results. This is one of them: Statistics over IT projects failure rate [] - a summary review of several of the most definitive studies over the last 20 years. And this one: HealthCare.gov website 'didn't have a chance in hell' [] notes that:

        The Standish Group, which has a database of some 50,000 development projects, looked at the outcomes of multimillion dollar development projects and ran the numbers for Computerworld.

        Of 3,555 projects from 2003 to 2012 that had labor costs of at least $10 million, only 6.4% were successful. The Standish data showed that 52% of the large projects were "challenged," meaning they were over budget, behind schedule or didn't meet user expectations. The remaining 41.4% were failures -- they were either abandoned or started anew from scratch.
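A quick back-of-the-envelope check of those Standish figures (Python; the 3,555-project total and the three percentages are taken from the quoted Computerworld piece, the head counts are just rounded arithmetic):

```python
# Sanity check of the quoted Standish Group figures:
# 3,555 projects (2003-2012) with labor costs of at least $10M.
total = 3555
successful = round(total * 0.064)   # "only 6.4% were successful"
challenged = round(total * 0.52)    # over budget, late, or missed expectations
failed     = round(total * 0.414)   # abandoned or restarted from scratch

print(successful, challenged, failed)    # roughly 228, 1849, 1472 projects
print(successful + challenged + failed)  # 3549: 6.4 + 52 + 41.4 = 99.8%, rounding
```

The three percentages sum to 99.8%, not 100%, so the quoted figures are themselves rounded.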

        And I suppose this: £12bn NHS computer system is scrapped... and it's all YOUR money that Labour poured down the drain [] fits into this model pretty well. (Regardless of one's opinion about the Daily Mail.)

  • by Anonymous Coward on Saturday November 02, 2013 @04:32PM (#45313403)

    Doesn't it strike anyone as odd that the Govt can design and implement a billion+ dollar data storage center for the NSA but can't deploy a website to allow people to sign up for insurance?

    • Who says the storage center is implemented well?

      • by Z00L00K ( 682162 )

        It's probably implemented a lot better, since those permitted to do that work at least need to pass some security checks. This means that it's not just any cheap labor that can be employed to do that work.

        But for a lot of work the cheapest bidder is mandated by law.

        • Eh.. cheap bidder, high bidder, both are going to scrimp. You have to make your requirements clear and stick to them and have good oversight. I don't think that's what happened with the health care web site, though. There just wasn't enough time to do it right. I'm not sure they had enough time to understand the requirements.

          Regardless, there's no evidence either way on the data center. They might have great system. They might have a mediocre system propped by insane hardware investment. They might h

        • But for a lot of work the cheapest bidder is mandated by law.

          Yes, because otherwise we'd simply be wasting even more money for the same poor quality. Either the government needs to develop things like HealthCare.gov in-house, with government employees, or it needs to leave it entirely to the market. Outsourcing such things doesn't work; it just amounts to massive corruption followed by massive blame shifting.

      • Re: (Score:2, Insightful)

        by mi ( 197448 )
        It is not, actually []:

        The NSA’s new data-storage center in Utah has suffered a series of mysterious meltdowns in the past year.

        Officials told [] the Wall Street Journal that 10 fiery explosions, known as arc-fault failures, have ripped apart machinery, melted metal and destroyed circuits. The repeated meltdowns have delayed the opening of the one-million square foot facility by 12 months.

        But the Anonymous GP may be right in suspecting the failure is deliberate... nudging things toward Obama's personal favorite healthcare model

        • Conspiracy-Theory-Fu (Score:5, Interesting)

          by theshowmecanuck ( 703852 ) on Saturday November 02, 2013 @05:53PM (#45313977) Journal

          Maybe it's the fault of libertarians that seem to make up a significant percentage of the tech demographic; wanting to kill the Affordable Healthcare Act. Or tea party programmers wanting the same thing who managed to get on the project. Come on man! Think of some more conspiracies!! Lovin' it.

          Of course it couldn't be the incompetence of contracting companies that seem to make a living because they have or aim to have some sort of inside track [] in Washington rather than the chops to do the actual thing that needs doing. Of course that would never happen in Washington or any other political capital []. I'm not saying the way the primary contractor, Quebec company CGI, does business in any way follows recent [] Quebec business [] practices []. They are probably a well above board and good honest corporate citizen (although according to the Washington Post article above they did screw up another medical system based project). I'm just saying that if Quebec ever did separate from Canada, as it is now, they'd have to think up some other adjective to describe it. It's too cold to grow bananas there.

          Frankly (and personally) though, I wouldn't trust any company to government contracts with stated aims published in their profiles like: "The ultimate aim is to establish relations so intimate with the client that decoupling becomes almost impossible," (see Washington Post article). Especially not from Quebec.

      • Who says the storage center is implemented well?

        Angela Merkel

    • by Teancum ( 67324 ) <robert_horning AT netzero DOT net> on Saturday November 02, 2013 @04:48PM (#45313515) Homepage Journal

      Doesn't it strike anyone as odd that the Govt can design and implement a billion+ dollar data storage center for the NSA but can't deploy a website to allow people to sign up for insurance?

      At least we can be comforted by the fact that the NSA data center is likely operated at the same levels of efficiency and competency.

    • by geoskd ( 321194 ) on Saturday November 02, 2013 @04:49PM (#45313523)

      Doesn't it strike anyone as odd that the Govt can design and implement a billion+ dollar data storage center for the NSA but can't deploy a website to allow people to sign up for insurance?

      That's because one was designed by a bunch of guys on a mission, with an exceptionally strong feeling of patriotism and righteousness with practically no oversight by congress. The other was done by the lowest bidders (largely not even American citizens), built on a framework that was made practically impossible to implement by a meddlesome and conflicted congress.

    • Who's to say the NSA didn't also have issues of the same scale?
    • by khasim ( 1285 )

      Doesn't it strike anyone as odd that the Govt can design and implement a billion+ dollar data storage center for the NSA but can't deploy a website to allow people to sign up for insurance?

      Nope. Because it is always possible to spend MORE money on a project in an attempt to get X results.

      The trick is to get X results with the lowest cost. Someone who spends $1,000 on a loaf of bread may not be the best person to send grocery shopping. And that loaf of bread may not be worth $1,000. And when the project was

      • For example, the TSA has a huge annual budget. Yet they've never caught a single terrorist.

        The purpose of the TSA is to get Americans (even more) used to the idea that government agents can search you whenever they deem it necessary, without a warrant. Sure, a long time ago some old white men wrote a 4th Amendment saying they can't do that, yeah sure, but by stepping into the airport you automatically agreed to waive your inalienable right, EULA-style. So you see it's all legitimate and there's nothing to see here.

    • The Govt can't design and implement a billion+ dollar data storage center. It can hire people to do it for them. Badly. []

    • by pjt33 ( 739471 )

      I bet that the requirements document for that data storage centre was considerably shorter than the requirements for HealthCare.gov, and there was probably more input from the people tasked with building it into how long it would take and what was a reasonable deadline.

    • Contractor fraud and abuse is the problem.
    • Re: (Score:3, Interesting)

      by thunderclap ( 972782 )
      You mean the one in Utah? Meltdowns Hobble NSA Data Center []: 10 fiery explosions, known as arc-fault failures, have ripped apart machinery, melted metal and destroyed circuits. No, because the govt couldn't design and implement a billion dollar data storage
  • So basically (Score:2, Insightful)

    by Horshu ( 2754893 )
    The web site turned out like every other v1 web app that gets rushed out to an externally-set deadline?
    • It's about profit model.

      The problems that were reported as "problems with the website" were either standard IT issues (no excuse, but no need to exaggerate) solvable with routine IT engineering work or they were problems inherent in the profit model of the insurance companies.

      Health care is like clean water or plumbing: something virtually every American would want or need.

      The very definition of government is to group our resources...and any time humans group for any is to somehow

  • by rs79 ( 71822 ) <> on Saturday November 02, 2013 @04:55PM (#45313567) Homepage

    It was slow to load, I couldn't sign up, my browser hung waiting on lost connections to the too-many files it was trying to download, and there seem to be server sync problems with the back-end databases.

    In other words it acts like PayPal, Google, Facebook and Slashdot.

  • Systemic (Score:4, Informative)

    by ErnoWindt ( 301103 ) on Saturday November 02, 2013 @04:56PM (#45313569)

    Not "systematic."

  • HealthCare.gov is merely a distraction from Obamacare, a.k.a. the Patient Protection and Affordable Care Act. Sooner or later the website will be fixed and many will think that the mission has been accomplished. It is obvious that the Affordable Care Act is not really affordable for the middle class; it is merely a new additional tax for most of the working people, who were mostly silent through the process. The Affordable Care Act does little to employ free market principles and to combat the true problem: HealthCare
    • Re:A distraction (Score:4, Insightful)

      by game kid ( 805301 ) on Saturday November 02, 2013 @05:31PM (#45313815) Homepage

      "free market principles" won't help here. On the contrary, just think of the money that would go into actual health care if the government came in guns-ablaze and forcefully said "no, United Health Care, you can't treat your customers like the deepest turd of a batch of untreated sewer sludge []", or "no, big drugmaker, you can't throw millions of dollars on advertising niche products like fucking Restasis all over primetime tv instead of putting the money toward cutting the costs of life-saving meds".

      Those are two cases where I'd actually be elated to see the NSA and TSA put into use: snoop on the moneyed fuckers involved and No-Fly 'em as soon as it's clear they want to take anything that resembles a business trip to plan their next splurge.

      • Well, that's one point of the law I like. It does require some minimum percentage of revenues to be used to pay for healthcare.
  • by linebackn ( 131821 ) on Saturday November 02, 2013 @05:09PM (#45313653)

    Regardless of "what went wrong", you know that the higher ups will just fire some peons, give themselves some big bonuses, and call it a day.

    But the BIGGER question I don't see anybody asking is: why is there no apparent fallback plan, or concession to delay requirements, given the problems? ANY significantly complicated computer system can reasonably be expected to encounter problems at deployment. And despite what the talking, drooling, blathering heads on TV seem to think, it is simply IMPOSSIBLE to test a system like this 100.000000000000% against real world scenarios. There will be glitches, there will be people who can't use the systems, there will be all sorts of "people problems" that no technology can fix. They should have been ready with other, non-webby ways to get people taken care of, and prepared to delay the requirements if they could not get everyone taken care of in time.

  • by Ukab the Great ( 87152 ) on Saturday November 02, 2013 @05:13PM (#45313683)

    It's hard enough to work with one spotty vendor, let alone 55. That number, 55, represents somewhere between 55 and 55-squared lines of possibly iffy communication between possibly iffy organizations. When I first heard that had 55 contractors working on it, I was surprised that the damn thing ran at all.
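For what it's worth, the classic count of pairwise communication channels among n parties (the figure behind Brooks's "adding people makes it later") pins down where in that 55-to-55-squared range the real number sits:

```python
def channels(n):
    """Pairwise lines of communication among n parties: n * (n - 1) / 2."""
    return n * (n - 1) // 2

# 55 contractors -> 1,485 possible vendor-to-vendor channels,
# squarely between the parent's lower (55) and upper (55^2 = 3025) guesses.
print(channels(55))
```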

  • by Gravis Zero ( 934156 ) on Saturday November 02, 2013 @05:31PM (#45313813)

    it's simple: they didn't do enough testing and bug fixing. there should have been at least 6 months of testing and debugging to get this system working well. the information i found was that 248 people were able to sign up on the first day. so it works... kind of. there were bugs like spouses sometimes ending up being filed as children.

    it's obviously a complex system but i take the 80m lines of code number with a grain of salt because i'm sure that includes all the libraries they (re)used too and maybe even an entire JVM. as such, it's probably all in house crap for each and every contractor, 55 if i remember correctly. there was obviously lazy coding involved to get that much bloat. there could be a swath of libs included that arent even used but were thrown in there "just in case i need it".

    i hope the companies helping them will gut the use of most proprietary libs, because they are an easy way to get terrible bugs and gaping security holes. i also hope they move to a unified OO language to get a handle on this feral system. however, if i find out that google convinced them to rewrite it all in Go, i'll just cry.

  • Splat Programming (Score:5, Insightful)

    by wdhowellsr ( 530924 ) on Saturday November 02, 2013 @05:34PM (#45313829)
    The ObamaCare web site is an example of Splat Programming. What is Splat Programming? Cut and paste from everywhere, run it once and move on if it appears to even marginally work, and don't think very long about method or variable names. The most important part of Splat Programming is that you don't try to combine css or js files but rather just reference them individually via CDN, and only change the function names or variables that conflict. Most importantly, do not do any load, scaling or security testing, especially if you know that the test will fail.

    The other part is Government Projects. You don't have to worry about errors and omissions because the standard government contracts do not hold the contractor liable if the final result is approved. Finally, unlike commercial projects, there is an infinite amount of money available to pay for years of bug fixes and upgrades.

    Thankfully this site only affects a small percentage of people, so there is really no cause for alarm. :)
  • by epyT-R ( 613989 ) on Saturday November 02, 2013 @05:38PM (#45313857)

    Typical of what happens when an organization is too used to spending other people's money. It's like a 16yo girl's runaway spending habits with daddy's credit card...and she's got him by the balls, too, along with her mother.

    • Re: (Score:3, Insightful)

      by plopez ( 54068 )

      The number of large failed private sector IT projects makes this look like a drop in the bucket.

  • What went wrong? (Score:5, Insightful)

    by Chas ( 5144 ) on Saturday November 02, 2013 @05:47PM (#45313927) Homepage Journal

    First and worst, politicians were involved. Everything else pretty much is a cascade effect off that.
    Second, cronyism.
    Third, you had a bunch of non-technical people setting up moving goalposts for the technical people to hit, with regard to the technical specs of the site.
    Fourth, distinct lack of firm, single-message communication to the technical teams with regards to whether the project was or was not going forward.

    I could go on and on about all the fuckups with regard to this. But I'd just piss off a bunch of people who aren't worth my time.

  • by subreality ( 157447 ) on Saturday November 02, 2013 @06:38PM (#45314231)

    What went wrong is we created a system which requires extensive paperwork for insurance. It should have been a web form that asks "Are you a US citizen?" and if you answer yes, it says "OK, you're covered."

    You can make the system (not just the web site) even more efficient by eliminating that question and simply serving static HTML.

  • by bfwebster ( 90513 ) on Saturday November 02, 2013 @06:51PM (#45314311) Homepage

    Quote 1: "A complex system that works is found to have invariably evolved from a simple system that worked. ... A complex system designed from scratch never works and cannot be made to work. You have to start over, beginning with a working simple system." (John Gall, Systemantics, p. 80, 1978 paperback edition)

    Quote 2: "In architecting a new [software] program all the serious mistakes are made in the first day." (Martin, 1988, cited in Maier & Rechtin, The Art of Systems Architecting (3rd ed.), p. 399)

    Quote 3: "Indeed, when asked why so many IT projects go wrong in spite of all we know, one could simply cite the seven deadly sins: avarice, sloth, envy, gluttony, wrath, lust, and pride. It is as good an answer as any and more accurate than most." (me, testifying before the Subcommittee on Government Management, Information, and Technology Hearing, US House of Representatives, June 22, 1998)

    My pre- and post-launch analysis of the website can be found here []. ..bruce..

  • Government (Score:4, Informative)

    by Maudib ( 223520 ) on Saturday November 02, 2013 @06:57PM (#45314337)

    What went wrong? Government.

    The ACA has some great theory behind it. Assuming that the federal government will be able to operate and maintain a system like this in a cost-effective fashion is lunacy. It was bound to fail.

    Also don't tell me it was Republican "starve the beast" strategy. The ACA was fully funded and largely untouchable. By any reasonable standard the roughly $400m spent on implementing this was incredibly excessive. If a private company had wanted to build this system for profit, it would have been done for under $100m. The big mistake of the ACA was that it did not allow for the creation of privately run and owned exchanges.

    • web sites (Score:4, Insightful)

      by junkgoof ( 607894 ) on Saturday November 02, 2013 @08:10PM (#45314773)
      Why does everyone think making a web site is easy? With multiple feeds using different technologies even a fairly minimal health care web site would be complicated. Add in a whole lot of states that oppose the process and delay finalizing the requirements (client from hell) and you can pretty easily get to a point where the implementers have to choose between being late and being wrong. Think of the length of the requirements document distilled from the laws and negotiations. Think of the army of business analysts needed to get functional requirements and of the timeline they have to meet. Remember that no one ever hires enough business analysts.

      This is not an easy thing to do.
  • by hey! ( 33014 ) on Saturday November 02, 2013 @10:14PM (#45315367) Homepage Journal

    Use your special system architecture x-ray vision, folks. This is not a simple, stand-alone site like Slashdot that just has to do some database queries and generate some XML, then use JQuery or something to asynchronously load some advertising into a DIV. This is a system that must orchestrate a complex *synchronous* process involving servers that belong to outside organizations.

    Case in point: the system requirements say that the site must exclude illegal immigrants, so the system has to request and obtain proof of your status from Homeland Security's servers before it can proceed. Also, instead of issuing the same subsidy to everyone, the law specifies an income-dependent, means-tested subsidy, which means the system ALSO has to check your claims against the IRS's computers before continuing. That's before it actually gets to obtaining the marketplace data.

    So the most complex aspect of this system is essentially untestable short of a near-full scale roll-out. Hey, IRS, can I try hosing down your servers with JMeter? Even if you could orchestrate the non-functional testing you'd want to do, you won't know how the system works until it's handling real data. It's not like you can shove a test load equivalent to a thousand applications per hour, then another equivalent to ten-thousand, then draw a straight line that will tell you how the system will perform with twenty-thousand. There are some serious discontinuities in performance lurking, and the actual data submitted is likely to change things.
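The no-straight-line point falls out of even a toy queueing model. A minimal sketch (Python; the 20,000-applications-per-hour capacity is an invented figure purely for illustration, not anything known about HealthCare.gov):

```python
# M/M/1 queue: mean time in system W = 1 / (mu - lam), valid only for lam < mu.
# mu = service capacity, lam = arrival rate, both in applications per hour.
def mean_latency_seconds(lam, mu=20_000):
    if lam >= mu:
        return float("inf")      # arrivals outpace service: queue grows forever
    return 3600.0 / (mu - lam)   # hours -> seconds

for lam in (1_000, 10_000, 19_000):
    print(lam, round(mean_latency_seconds(lam), 2))
```

Going from 1,000 to 10,000 applications/hour roughly doubles latency, but 19,000 is an order of magnitude worse than a straight line through the first two points would predict, which is exactly the discontinuity the parent warns about.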

    I think if I were in charge of this, the extreme difficulty of realistic non-functional testing might have led me to isolate some of the data interchange into a post-processing step. That is, I'd let people apply and take them at their word about their immigration status and income, then tell them to check back in a day while we confirm the data they submitted. It's more bureaucratic, but a big part of user experience is predictability. If someone knows they can complete their application in half an hour and come back 24 hours later for confirmation, it's not so bad. But if the system is designed to give them the expectation that they can finish in a half hour, but sometimes takes so long their sessions expire, that's a disaster.
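That post-processing design can be sketched in a few lines. Everything here is hypothetical (the function names, the verification rules, the in-process queue); it only illustrates the accept-now, verify-later flow described above:

```python
# Sketch of deferred verification: accept the application immediately on the
# applicant's word, then queue the external checks (DHS, IRS) for later.
from queue import Queue

pending_checks = Queue()

def submit_application(app: dict) -> str:
    """Accept the applicant's self-reported status and income right away."""
    app["status"] = "pending verification"
    pending_checks.put(app)
    return "Application received; check back in 24 hours for confirmation."

def verification_worker(verify_immigration, verify_income):
    """Post-processing step: drain the queue against the external services."""
    while not pending_checks.empty():
        app = pending_checks.get()
        ok = verify_immigration(app) and verify_income(app)
        app["status"] = "confirmed" if ok else "needs follow-up"

# Example run with stubbed-out external services (cutoff value is invented):
a = {"name": "Jane Doe", "citizen": True, "income": 30_000}
submit_application(a)
verification_worker(lambda app: app["citizen"],
                    lambda app: app["income"] < 45_000)
print(a["status"])  # confirmed
```

The user-facing request never blocks on an outside server, which is the predictability trade-off the comment argues for.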
