Monday, May 9, 2011

My Beef with Databases (Part 3)

This is the third in a series of blog posts about my beef with relational databases today, and some musings on how we could make databases better.

I have a bone to pick with the databases of today. It has developed because of the frustrations I've had in writing, deploying, and interacting with databases over the years. In fact, I have three specific issues I want to talk about, and perhaps, come up with a few ideas to make databases great.

Last time, I described my issue with the data definition being split between my application and my database. So let's talk about my third biggest beef with databases as they exist today.

SQL is a human language used as a communication protocol

Before we go any further, let's make sure we're talking about the same thing. I am NOT talking about relational databases; I'm fond of those. I am talking about the actual SQL language. You know, "select * from table;". And even then, I don't have a problem with the language itself. It was designed to be human readable first, and it just happened to become something of a data exchange language.

I do, however, very much have an issue with using it as a communication protocol between two computers. It's clearly not designed for that, since its structure is more linguistic than data structure. For example, you can't leave an extra comma after a list, a trailing AND at the end of a WHERE clause is invalid, and you can't change the order of the clauses, which by itself would simplify a lot of code. Just for starters, why isn't "SELECT user_id, username, password, WHERE username='atrodo' AND password LIKE '12345' AND FROM user;" acceptable? It is unambiguous and far easier for programs to generate.
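
To see why, here's a minimal sketch in Perl (my own, not from any particular library) of the string juggling every SQL generator ends up doing: conditions can't simply be emitted one after another, they have to be collected and joined so that no trailing AND sneaks into the text.

use strict;
use warnings;

# Example inputs; in real code these would come from the caller.
my ($username, $password) = ('atrodo', undef);

# Collect the conditions first...
my @conditions;
push @conditions, "username = ?"    if defined $username;
push @conditions, "password LIKE ?" if defined $password;

# ...then join them, so no trailing AND ends up in the string.
my $sql = "SELECT user_id, username, password FROM user";
$sql .= " WHERE " . join(" AND ", @conditions) if @conditions;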

And that's the root of the problem. We're using programs to generate text that was meant to be written by humans in order to talk to another computer. It's not forgiving, it's not loosely defined, and it's not a data structure. It's difficult to produce and difficult to parse, and most importantly, it's as much of an API as COBOL is.

Put another way, would we use COBOL source to create web APIs? Do we exchange SSL keys using C? Are HTTP request headers a Perl program?

Less and less of the SQL inside an application is hand written. More and more of it is generated by ORMs or SQL abstraction layers. And these tools are rarely simple. Quite the opposite, actually: they are normally much more complex than they need to be. And in my experience, these SQL generators end up being used to produce whatever SQL happens to work, instead of expressing what data is actually wanted.

So then the question I ask myself is: if not SQL, then what? Is XML or JSON really a better option? Absolutely. Both were designed as mappings to real application data structures; in the case of JSON, it is a data structure. They are meant to be a data exchange format first and happen to be somewhat human readable.
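
For instance, here's a sketch of the same query expressed as a plain data structure. The shape of this hash is hypothetical, my own invention rather than any existing protocol.

use strict;
use warnings;
use JSON;  # encode_json comes from the JSON module on CPAN

# No commas or clause orderings to get wrong, and trivial for a
# program to generate or parse.
my $query = {
    select => [qw(user_id username password)],
    from   => 'user',
    where  => {
        username => 'atrodo',
        password => { like => '12345' },
    },
};

print encode_json($query), "\n";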

Why do we continue to use SQL as a database API?

Friday, May 6, 2011

YouTube "Http/1.1 Service Unavailable"

Can anyone tell me why, right now, YouTube returns "Http/1.1 Service Unavailable" when I try to do anything with my account on the site?  Why does this even happen?

I thought the cloud was supposed to save us from this.  One node fails?  That's cool, we'll route to someone else.  From my searches, it sounds like this is not an uncommon occurrence, and it has been happening for years.  This sounds like a lack of redundancy and a single point of failure, which is exactly what Google assures us they do not have.

And yet, here I am, unable to access portions of YouTube with no explanation.  Nothing on the website, nothing on Twitter, no signs at all of what's happening.  If this is what the cloud gives us, then it sounds like it's more cloud than computing.

My Beef with Databases (Part 2)

This is the second in a series of blog posts about my beef with relational databases today, and some musings on how we could make databases better.

I have a bone to pick with the databases of today. It has developed because of the frustrations I've had in writing, deploying, and interacting with databases over the years. In fact, I have three specific issues I want to talk about, and perhaps, come up with a few ideas to make databases great.

Last time, I described my number one issue, the code separation that databases enforce. That is the biggest issue primarily because the other two are related to it. So let's get into my second biggest beef with databases as they exist today.

My data definition is not in my repository

Which, to a point, isn't a completely accurate assessment of the situation. The definition of my data is in two places. Two very different, very disjointed places. And there is no mechanism for the two to know that their definitions differ. This issue is exacerbated in the presence of ORMs or other tools that need or want to understand the structure of the data in the database and depend on the programmer to describe the data definition.

Then the question becomes: who is the authority on the data definition? On the theoretical side, the database is in charge of the definition and structure of the data. On the other side, the database is a tool for the application, so the definition and structure should live in the application. I'm a firm believer that the latter is the proper arrangement. My application should define all behaviors, structures, and definitions.
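
To be fair, ORMs already let you write the definition on the application side. Here's a sketch using Perl's DBIx::Class; the table and column details just mirror the examples from earlier in this series.

package MyApp::Schema::Result::User;
use base 'DBIx::Class::Core';

# The data definition lives in the application, under version
# control, right next to the code that uses it.
__PACKAGE__->table('user');
__PACKAGE__->add_columns(
    user_id  => { data_type => 'integer', is_auto_increment => 1 },
    username => { data_type => 'varchar', size => 64 },
    password => { data_type => 'varchar', size => 64 },
);
__PACKAGE__->set_primary_key('user_id');

1;

Of course, this only sharpens the problem: now there really are two definitions, and nothing checks that they agree.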

When our applications connect to the database, both sides assume they are talking about the same data model. The reality is that often they are not. What's even worse is when the database is just subtly different, which means things will appear to work correctly when they don't, potentially even corrupting data.

I can appreciate that this is a difficult problem in practice. That doesn't mean it should be ignored. Quite the opposite. My data is vital, which is why I use a database and not some mish-mash of files and /dev/null. I could accept the issue if I could always use create scripts and not be forced to use a create script followed by a series of alter scripts. Or maybe I could even hand the database a create script and have it tell me what's different. That'd also be cool. But that doesn't happen. No one, to my knowledge, does that.
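
Nothing stops us from hand-rolling a crude version of that check ourselves, though. A minimal sketch using Perl's DBI; the connection details and the expected column list are placeholders, and driver support for column_info varies.

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:Pg:dbname=myapp', 'app', 'secret',
                       { RaiseError => 1 });

# What the application believes the user table looks like.
my %expected = map { $_ => 1 } qw(user_id username password);

# What the live database actually has.
my $sth = $dbh->column_info(undef, 'public', 'user', '%');
my %actual = map { $_->{COLUMN_NAME} => 1 }
             @{ $sth->fetchall_arrayref({}) };

warn "missing column: $_\n"    for grep { !$actual{$_} }   keys %expected;
warn "unexpected column: $_\n" for grep { !$expected{$_} } keys %actual;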

As I was describing this issue to a friend, he asked about object and document data stores, like what GAE uses or MongoDB. These things are cool, and to a degree they solve this issue, but in the general sense they solve it by not enforcing data structure, making data integrity your responsibility. And that's entirely true: data integrity IS my responsibility. But I think that without the right tools, that responsibility becomes overly difficult and cumbersome. And when things are difficult, programmers are among the first to become lazy and ignore the problem. Ignore it until something goes wrong. And it will eventually go wrong.

I want to ask, why can't my application define my database?

Wednesday, May 4, 2011

My Beef with Databases (Part 1)

This is the first in a series of blog posts about my beef with relational databases today, and some musings on how we could make databases better.

I have a bone to pick with the databases of today. It has developed because of the frustrations I've had in writing, deploying, and interacting with databases over the years. In fact, I have three specific issues I want to talk about, and perhaps, come up with a few ideas to make databases great.

Before I get much further, let's get a few points straight. First, I'm not in the NoSQL camp. Sorry if that disappoints you. Relational databases are great things. But, by the same token, the NoSQL camps have some neat ideas, and there are times when going that route makes more sense than using a relational database. There are merits to both methodologies.

So let's get on with issue number one, and this is my biggest beef with databases as they exist today.

My code is not in my repository

Yes, that's right, it's as simple as that. But what exactly does that mean? All of my code, including all of my business logic: all of my stored procedures, views, and data definitions live in the database. Now, I know what most people are going to say at this point: that what I'm saying is false, that a good repository holds the create and alter scripts for everything, and that as long as the database is kept in sync with the current version of the scripts, everything works.

Except when it doesn't.

Let me take a detour for a moment. If you have Perl Best Practices lying around, it's time to crack it open, because Damian Conway has some sage advice that speaks to this issue. In Chapter 13 of Perl Best Practices, Damian goes out of his way to explain why exceptions are better than returning special values, specifically:
It's human nature to "trust but not verify".
He goes on to explain that returning special values makes ignoring errors easy and dangerous. What happens when opening a file fails? In the normal case, open returns a special value: in Perl it's undef, and in C it's -1. So programmers have two choices: either check for the special value and handle it, or ignore it and let the bug present itself later in some unrelated section of the code. Guess which one is easier? That's right, the obscure-bug-inducing option of doing nothing. Damian finishes this thought by explaining:
Don't return special error values when something goes wrong; throw an exception instead. The great advantage of exceptions is that they reverse the usual default behaviors, bringing untrapped errors to immediate and urgent attention.
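
In Perl, the contrast looks like this (the file name is just an example):

# Checking the special value is opt-in; forget the "or die" and
# a failed open silently hands back undef.
open(my $fh, '<', 'app.conf')
    or die "Cannot open app.conf: $!";

# autodie reverses the default: from here on, an unchecked
# failure throws an exception instead of returning undef.
use autodie qw(open);
open(my $fh2, '<', 'app.conf');   # dies loudly on failure
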
So what does this have to do with databases? It illustrates why I think alter and create scripts are bad. The developer trusts that those scripts have run and often does nothing to verify it. Yes, developers should check, just like they should always check return codes. But they don't. And that's my point: why should they? We, as programmers, have moved away from assuming that everyone is talking the same way. We put doctypes at the top of XML files because we no longer assume we're all talking HTML 3.2. We embed protocol versions in streams so everyone knows how to communicate. We have extensible file formats that allow older readers to read newer files. And yet, when talking to databases, we just assume the triggers and stored procedures are there.
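
The database equivalent would be the application declaring which schema revision it was written against and refusing to run against anything else. A sketch using Perl's DBI; the schema_version table is hypothetical, and the connection details are placeholders.

use strict;
use warnings;
use DBI;

my $EXPECTED_SCHEMA = 42;   # the revision this code was written for

my $dbh = DBI->connect('dbi:Pg:dbname=myapp', 'app', 'secret',
                       { RaiseError => 1 });
my ($actual) = $dbh->selectrow_array(
    'SELECT version FROM schema_version'
);

die "Schema version mismatch: code expects $EXPECTED_SCHEMA\n"
    unless defined $actual && $actual == $EXPECTED_SCHEMA;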

Some clever programmers have recognized this issue and decided to avoid it completely by putting all of the business logic into the application. I've done it before, and it is a solution. The major flaw is that it moves the logic farther from the data, when the desire is to get the business logic closer to the data so that it is always applied at the database level. It also assumes that everyone is going to use the business logic or ORM to access the data. Hint: they're not always going to.

So on the one side, we have the maintenance cost of making sure the database we are talking to is always running the correct version of our database code (which is kept completely separate from the rest of our code). On the other side, we have to make sure all of our business logic is run no matter who interacts with the database.

I want to ask, why do I have to make a decision?

Tuesday, February 8, 2011

jQuery/pre-wrap trick

I've had a recent encounter with Internet Explorer 7 for $DAYJOB.  For this section of code, IE7 is our minimum required browser.  I was displaying some user input and wanted to maintain the text formatting without using <pre>, which I find ugly.  So, for a while, I was using the CSS style "white-space: pre-wrap;", which lets the text flow like HTML but break at newline characters like pre.  Perfect.  This had the added benefit that the text would remain editable: we have a good jQuery plugin that will take a div, turn it into a textarea, and use the content of said div as the initial content.  After a while, I noticed that IE7 was not behaving.  That's because IE7 does not support the pre-wrap CSS style.  Well, that's an issue.  So after a bit of reworking, I came up with this:


$("<div/>")
  .addClass('user_input')
  .append(
    $(
      $.map(
        this.split("\n"),
        function(v, i)
        {
          return [
            $("<span/>").text(v+"\n").get(0),
            $("<br/>").get(0)
          ]
        }
      )
    )
  )

So let's break this down a little.  
  1. I create the div, and add a class to it.
  2. I begin an append of the content
  3. I then take the output from a map, and turn it into a jQuery object for the append (Yes, this is important)
  4. I use the data I've been given and split it by a new line character and use that array as the input to a jQuery map operation
  5. Please note that I used the quote form of split, not the regex form.  IE7 also removes empty array items when using the regex version.
  6. For each item from the split, I return an array.  The first element is a span that includes the text we need, plus a newline, to maintain the text representation of the data.  The second is a <br/> to take the place of the newline in the HTML.
  7. Last, I return the DOM elements of the span and br.  This is important, since that's what the jQuery constructor back in step 3 expects.  Also, I must ask for element 0, since otherwise I'll get an array instead of an element back from .get().
The best part of this whole thing is that if I do a .text() on the div, I get exactly the text I need for the editable textarea.  Perfect!  Well, not exactly.  This may have issues with spaces being insignificant, but in our case, that's acceptable for now.

Friday, February 4, 2011

The NQP question

This weekend was a busy weekend for me, but more importantly, it was a busy weekend for Parrot.  There was a Parrot Developer's Summit (PDS), which I wasn't able to attend, and a lot of good discussion happened.  Perhaps too much, since I glazed over a couple parts of the discussion.  If you're interested, the summit log can be found at http://irclog.perlgeek.de/parrotsketch/2011-01-29.  Make sure to hit the next day as well, since it spans two days' worth.

There was some Lorito discussion, which is interesting to me.  But actually, most of the discussion about Lorito was about the announcement that by Parrot 3.3, which lands in April, we will have a spec and an initial implementation of Lorito.  The team, to my recollection, is going to be cotto, dukeleto, bacek, and myself.  There was also some clarification of the terms M0 and Lorito: M0 being the opcode set and bare minimum, and Lorito being the whole project.

But this blog posting isn't about Lorito.  It's about NQP.  The major discussion that took place during PDS was the suitability of the new NQP being included as a core component of Parrot.  This discussion was interesting for me since there were good points on each side.

Before I go any further, let me explain what's happening to NQP.  pmichaud explains the change in detail at http://irclog.perlgeek.de/parrotsketch/2011-01-29#i_3232428, but the quick version is that NQP is going to become multi-backend.  It will support output to Parrot, the JVM, and the CLI.  It was made clear that it may not be able to support all VM backends equally; some features that Parrot provides naturally will be difficult on the JVM, for example.  But the goal is to let Rakudo, and anyone else using NQP, retarget another platform with relative ease.

On the surface, this appears to be a noble goal and something Parrot would be interested in.  I'm going to admit, it's appealing.  But look deeper under the surface and you'll find some deep issues.  The big one is maintenance cost.  This may not seem like much, since it's the NQP team that will be maintaining the code, but as long as NQP is a core component included with Parrot, the Parrot team has a responsibility to it.

It's like when I was at college: students always wanted the IT department to install AIM on all the lab computers, since it always got installed anyway.  What few people seem to realize is that by doing so, IT takes on some form of responsibility for the installation.  Students are more apt to go to IT first to resolve an issue with AIM, and that volume of support tickets has a cost.  When AIM is installed by IT, support is implied.  Same with Parrot: if JVM support is in the Parrot core, support for it is implied by its existence.  In this regard, I agree that including the new NQP, alternate backends and all, is not what Parrot should do.

The issue isn't so cut and dried that we can say NQP should leave the nest simply because it supports multiple backends.  The reality is that Parrot needs a tool like NQP.  People who want to start developing for and targeting Parrot need a tool that can get them started fast.  That's not to say there should be one true tool; there should be several to fit people's needs.  But Parrot needs a default, and that default is something Parrot needs to support.

In the PDS aftermath, there were a few reasonable options brought up.

  1. Include the new NQP, but only include the "use parrot;" equivalent functionality.  This seems to be the option that is in favor right now.
  2. Continue to use NQP-rx, which is what is currently supported.  Personally, I don't like this option, since it means that Parrot will diverge from NQP, making it difficult for HLL writers to transfer from NQP-rx to NQP.
  3. Use another existing tool, something like Winxed.  Actually, the more I look at Winxed, the more I think it could fit the bill.
  4. Create a brand new tool.  Feasible, but it wouldn't happen anytime soon.  My fear is that it'd end up being something unpleasant to work with, like PASM/PIR.  That fear is probably unfounded, but it's a concern nonetheless.
So, in the end, the question I ask myself is: what do I think Parrot should do?  Personally, I think options 3 and 1 are the most reasonable, and failing those, option 4.  Option 2 just shouldn't be an option in my opinion; too many problems could and would arise from having two divergent languages.  I do think option 3 is the most desirable, since that way we can support HLL writers out of the box while still leaving them options.  I understand the cost of doing that, and at that point I concede that option 1 is probably, at least in the short term, the option that should be pursued.

But that's just my opinion.