Quality of software, software processes and the UML

December 4, 2002  |  Brian Sam-Bodden
13 Comment(s)

Do you have any commentary on the impact of the Unified Modeling Language? Do you believe that software is best represented visually? Do you think that we can expect software quality to increase and development time to decrease as UML usage grows? Do you have any opinions on the graphic constructs used in the UML? Do you see anything major missing?

In my own experience trying to educate others on software modeling techniques, I have found that most programmers initially embrace modeling as part of the analysis or design effort, but once they have any code in place (either written by hand or engineered from the models) they never look back at the models.

Is it that the models’ expressiveness is not adequate for what programmers want to express?

I would love to hear your opinions.

Sincerely,
Brian

Comments
  • Scott Zetlan says:

    Models are great for planning a software design project, but they tend to oversimplify the challenges that arise when actually trying to implement someone’s great idea. I often use models of some sort or another — entity relationship diagrams to implement a database, data flow diagrams to illustrate process concepts, etc. — but when it comes to actually writing good code, such models generally fall short.

    UML, data flow diagrams, entity relationship diagrams and the like all show the “what,” but rarely illustrate the “how.”

    As an example — I have at my desk a diagram showing data (an arrow) flowing into the system (a circle) and being rather magically transformed into output (another arrow). I have further decomposed the processes, data flows, and data stores in subsequent models. The actual details of how the data is transferred, however, only appear in the code. I wrote the code that transfers a file using FTP over a TCP socket connection, and all those lines of code are represented with a single arrow on the diagram.

    Were I to try to represent the individual details of how a file is broken up into 1024-byte chunks and sent as packets of data across a socket connection over the internet to a host, the diagram would become so cumbersome as to obscure, rather than reveal, the basic truth I’m trying to display in such a diagram.
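
    For a sense of what that single arrow hides, a minimal sketch of pushing a file across a TCP socket in 1024-byte chunks might look like the following (the class name, host, and port are invented for the example; this is not the production code):

        import java.io.FileInputStream;
        import java.io.IOException;
        import java.io.OutputStream;
        import java.net.Socket;

        // Sketch only: stream a local file to a remote host in 1024-byte chunks.
        public class FileSender {
            public static void send(String host, int port, String path) throws IOException {
                try (Socket socket = new Socket(host, port);
                     FileInputStream in = new FileInputStream(path);
                     OutputStream out = socket.getOutputStream()) {
                    byte[] chunk = new byte[1024];            // the 1024-byte chunks
                    int bytesRead;
                    while ((bytesRead = in.read(chunk)) != -1) {
                        out.write(chunk, 0, bytesRead);       // each write goes out as one or more packets
                    }
                    out.flush();
                }
            }
        }

    Every line of that, and a good deal more (error handling, retries, logging), sits behind the one arrow on the diagram.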

    On the other hand, detailed flow charts (which show the individual steps taken in an algorithm) and structure charts (which show the hierarchy of function calls and parameters passed between functions) can be useful when planning a software project. The time required to keep such diagrams up to date, however, generally exceeds the benefit; it is quicker to simply make whatever changes are necessary to implement new ideas or work around obstacles directly in the code. Thus the models tend to fall by the wayside during the coding process.

    That said, ignoring during the development, implementation, and maintenance phases the concepts expressed during the design cycle (whether diagrammed or not!) is fatal to a project; it is a mistake made far more often than its consequences can excuse.

  • Edward Tufte says:

    All I know about software development is from Fred Brooks, The Mythical Man-Month.

    This book by Ron Baecker and Aaron Marcus seems to me to be a notable contribution on how design arrangements might help programming: Baecker, R.M., Marcus, A., Human Factors and Typography for More Readable Programs, ACM Press, 1990.

    These two books deal with issues internal to software development.
    What should guide that process?

    I believe the interface should be designed FIRST, by people who deeply understand the specific content and the specific analytic tasks that the interface screens are supposed to help with. Screen after screen should be specified in intense detail by content experts, independently and without reference to how those screens might be created.

    Only then do we turn to the technical implementation, which becomes simply a by-product of the interface screens and interface activities. The interface design, the content design, should drive the entire development process. Thus the lead managers for development of a project management program, for example, would be people who actually manage projects and who teach courses in project management. Too often, the available software system drives the design, rather than the content/analysis needs of the user.

    There are a lot of software solutions around desperately looking for some kind of problem to solve–that is, inside-out design. But better tools for users are more likely to be the product of outside-in design, which makes the content-substance and analytical tasks of the user the driving priority. Doing good outside-in design probably requires a thorough-going independence in specifying the interface; that is, the interface should be content-specified by people completely independent of the software development process. If not, the content specification will be governed and distorted by the needs of the already-existing software.

    Content-driven design requires a radical shift in power and control. The Vice-President for Programming reports to the Senior Vice President for Content!

  • Scott Zetlan says:

    What about software with no human user in mind? Many large-scale software projects involve few or no people at all; they simply connect many previously unconnected data storage or data transfer systems.

    As a concrete example: I once worked on a project involving several different banks’ mortgage processing systems. The only human-facing portion of this system was the web interface which allowed people to deliver data about themselves and search/apply online for home mortgages. Most of the programming involved transmitting data to and from the web server and the bank systems.

    On this particular system, the “model” for the visual interface was a series of screen mock-ups done in Photoshop or some similar tool. Annotations showed where links/buttons triggered back-end events. The user interface model was about the size of an unfolded page of the NY Times.

    The model for the back end systems filled, from top to bottom and left to right, the wall of a medium-sized conference room.

  • Thorlakur Ludviksson says:

    I heartily recommend Alan Cooper’s (cooper.com) book “The Inmates are Running the Asylum” for information on software interaction design. He also wrote another book, “About Face,” which is more programmer oriented (I’m not one).

  • Mathew Lodge says:

    UML and the like are useful because they force programmers to think about and design the interfaces to their code up front. Although the notions of encapsulation, abstraction and data hiding have been around for 20 years or more, it is staggering how much code ignores them. Also, for systems that don’t have a user interface, UML becomes a way to formally specify the interfaces to the other software systems.

    However, this is where UML’s usefulness ends, since it can say nothing about the actual implementation — i.e. the semantics of the code. And semantics are critical to software correctness.

    The reason the UML gets left behind while the code morphs into something else is that changes made during coding are hard to reflect back into UML. Most of the tools out there can automatically generate code templates from UML, but have a very hard time doing the reverse — which is where it all breaks down. It then becomes a burden to keep the UML updated to match the code.
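
    As a hypothetical illustration (the class and member names here are invented, not drawn from any particular tool): a UML tool can turn a class box into a skeleton like the one below, but the method body is written by hand afterwards, and that hand-written part is exactly what never finds its way back into the model.

        // Roughly what a tool might generate from a single class box in a diagram.
        public class OrderProcessor {

            // an attribute/association drawn in the diagram
            private final java.util.List<String> auditLog = new java.util.ArrayList<String>();

            /** Operation declared in the model; the signature comes from the diagram. */
            public boolean process(String orderId) {
                // Everything below is written by hand after generation. It is the
                // semantics the diagram cannot express, and it is what drifts away
                // from the model as the code evolves.
                auditLog.add("processed " + orderId);
                return true;
            }
        }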

    While I think Tufte’s point about starting with the UI and working backwards from there is a good rule of thumb, in practice you can only go so far with this technique before you have to write some code that actually does something. It’s very hard to iron out all the kinks in a user interface that doesn’t do anything yet, since you can’t complete any of the tasks it is designed to facilitate. In addition, you also can’t see any real data within the application, which might dramatically alter the usefulness of the interface. A good example here was some software I was involved in that executed orders. Without any real order data in the GUI, it looked OK. But when the system was built and real order data was entered… it became obvious the order displays had to be reworked.

    Cheers,

    Mathew

  • Edward Tufte says:

    Good point, and helpful that the interface design is driving the project.

    I’m also talking about an attitude of interface-content primacy as well as the giving of authority and responsibility for the product to the interface-content side.

  • Branimir Dolicki says:

    The issue raised by Mathew Lodge can be solved by iterative development, especially if iterations are kept short. That way interface designers can quickly see how their designs work in real life and adapt them in the next iteration, without surrendering control to programmers.

  • Michel Hardy-Vallée says:

    A friend of mine who is a software engineer offered me, in a discussion, a diverging view on the idea of interface first, code follows. The problem with that approach is that you are not creating a solid underlying software architecture, but rather following the fancy of the customer. This creates long-term maintenance problems as the needs of the customer evolve: any new change to the software will bring a slew of hard-to-implement changes and will contribute to the instability of the code.

    A more adequate answer to the problem of software conception (architecture), proposed by the schools of software engineering (and here in Canada it has become a profession with the same legal implications as the other engineering disciplines), is to stabilize the _code’s_ interfaces, i.e., to define, and stick to, the way the different components of your software communicate with one another.

    The internal architecture of software should take precedence over the GUI, the way the data is stored, the programming language used, etc., because it enforces a specific contract. This contract can then be fulfilled in different ways, or extended, but it ensures that what you developed first will remain stable. Working backward from the GUI does not lead toward any kind of well-thought-out architecture, but rather to an ad hoc solution.
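
    To make the “contract” idea concrete, here is a minimal sketch (the interface and class names are invented for illustration, not taken from any real system): the rest of the system depends only on the interface, while the implementations behind it can change freely.

        // The stable contract: other components program against this interface only.
        interface RateSource {
            double currentRate(String product);
        }

        // One way of fulfilling the contract today (values stubbed for the sketch)...
        class FlatFileRateSource implements RateSource {
            public double currentRate(String product) {
                return 6.25; // would really be read from a file
            }
        }

        // ...and another way later, without touching the code that uses the interface.
        class BankServiceRateSource implements RateSource {
            public double currentRate(String product) {
                return 6.10; // would really be fetched from a bank's system
            }
        }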

  • Martin Ternouth says:

    I think there is an interesting parallel here with building and construction: a building is after all only a three-dimensional GUI. The user wants something that will allow specific functions to be performed within it: medicine, musical performances or teaching. The architect wants the appearance of the building to reflect an aesthetic; and the civil, mechanical and electronic engineers want something that will not fall down, and will deliver water and electric services that can be maintained. And the user’s accountants want it all done for one-dollar-fifty a square yard.

    But it is the user’s requirements that should be paramount. I suspect that the reason software design still plays such a major part here is that it is not such a mature science as plumbing and electrical engineering.

  • Scott Zetlan says:

    That’s an interesting divergent view, but I still agree with the code-follows-interface methodology. The problem of evolving user requirements is softened somewhat when good designers force their users to think about what they really need. Good interface designers will work with both the programmers and customers to design an interface that meets the customers’ true needs while at the same time following a scheme that can, in fact, be programmed and maintained. Good programmers can break down virtually any visual interface into a web of related entities/objects, and then write extensible, re-usable code to implement those objects.
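
    As a toy example of that decomposition (the names are invented; this is only a sketch): a single mortgage-application screen breaks down into a handful of related objects, and the screen itself becomes a thin layer over them.

        // Hypothetical decomposition of one screen into domain objects.
        class Applicant {
            String name;
            double annualIncome;
        }

        class MortgageApplication {
            Applicant applicant = new Applicant();
            double requestedAmount;
            int termYears;

            // The screen maps its fields onto these objects and asks them questions;
            // it holds no business rules of its own.
            boolean isComplete() {
                return applicant.name != null && requestedAmount > 0 && termYears > 0;
            }
        }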

    One doesn’t present a solution without a problem. One doesn’t write code without requirements. In end-user software design, the basic requirements always come from the person who will use the software. Hence the interface must come first, or else you risk wasting effort (with code the user will never traverse) or worse, writing code that will have to be rewritten.

  • Tom Snider-Lotz says:

    I don’t see why it’s necessary to choose between solid software architecture and designing software to address the customer’s desires. As I understand it, agile software development, which employs a feature-driven approach and extensive collaboration between the customer and the project team, is able to deliver both well-written code and the features the customer wants.

    If the customer’s changes in requested features are so extreme as to require unsound code, that indicates a problem with the relationship between the contractor and the customer.

  • Gordon Fuller says:

    I would suggest that, in addition to complexity, the underlying problem is that coders and business processes have become increasingly separated, and a detailed, formalized language is the only way to bridge the gap.

    Essentially, today’s web-based environment is a return to the centralized programming control green-screen environment of the mainframe era. “Thin clients” have barely more control over the display of data than the old 3270 terminals. Control over both appearance and function has reverted to central IT (though that may now be located in Asia!). Sure, you get drop-down boxes now – but essentially you’re back to block-mode processing when you click the “submit” button on the browser page.

    Client-server was a brief foray into giving local users control over appearance and data, with tools such as Visual Basic and PowerBuilder that allowed “super users” to become wildly creative. Leaving aside the maintenance hassles of n-tier architecture, users could pull their own data and bring it to life (hopefully in a Tufte-like display).

    But coding in the Web environment has passed out of the hands of amateurs back into server farms, and you can’t study both the intricacies of Java and the business drivers fostering the need for a new system. Plus, as outsourcing has grown, the programmers are thousands of miles from the users and can’t pick up the nuances of how an application might best work.

    One good solution, from a contributor above, is to have frequent iterative milestones (conference room pilots [CRP] is one common term) where the users can get a flavor for how the functions they seek have been implemented. But how do you communicate the changes in detailed, accurate and reproducible terms, rather than saying “this number should come up instead of that number”? Hence UML.

    I agree that the complexity of UML can lead to “paralysis by analysis.” Unfortunately, the diagrams must remain at a high level of abstraction, otherwise they’d look like the bowl of spaghetti diagrams that we used to see in well-meaning Entity Relationship diagrams that tried to capture every enterprise parent-child relationship on a single piece of paper. Some of the BPM tools coming out might (by 2007?) do a good job of converting the UML into code as well as diagrams, but again they err if they try to add too much. The developer just wants to translate the functions and constraints into code as quickly as possible – the diagram merely serves as basic validation of the process for the analyst.

    Similar to the other threads in this forum on project management software, the initial question is whether UML makes for a better project, and whether there is some way to improve the visual display to increase the odds of a successful implementation. Personally, I don’t think the visuals can help, even if improved, for the reasons I’ve given above (complexity, detail drowning comprehension). UML models can certainly help a project, but they have to be used in conjunction with a strong change management process. The client and developer have to make trade-offs: is it worth re-doing 75% of the models for a 2-week coding effort near the end of the project? It’s a rare bid that budgets analysts for the tail end of the construction and deployment phases.

  • LeMel says:

    I would agree about the hardship of updating the model to match the evolution of the rendered code. Unless the project is quite small, or the model is very simple, the two can get out of sync quickly. The tools don’t appear to be up to easy round-tripping.

    On the user interface points, Donald Norman’s discussion of ‘mapping’ (cleanly separating how it looks to the user from what is happening under the hood) makes good reading on this topic in his “The Design of Everyday Things.”

    Discount methods exist for testing user interfaces without using computers. In fact, there may be a case to be made that the presence of the computer stops the user from giving good feedback on the interface. See Carolyn Snyder’s 1996 “Using paper prototypes to manage risk.” http://paperprototyping.com

    As someone who produces both UI art and UI code for an application development team, I believe there is a middleware space where the code for user-interface functionality should live, and it should speak both the language of the underlying architecture and the language of the human being on the other side. And only people interacting with users (via testing, heuristic evaluation, field study) should be allowed to make any adjustments in that space.
