Wordyard

Hand-forged posts since 2002

Code Reads #5: Knuth’s “Structured Programming with go to Statements”

November 17, 2006 by Scott Rosenberg

This is the fifth edition of Code Reads, a weekly discussion of some of the central essays, documents and texts in the history of software. You can go straight to the comments and post something if you like. Here’s the full Code Reads archive.

I have felt for a long time that a talent for programming consists largely of the ability to switch readily from microscopic to macroscopic views of things, i.e., to change levels of abstraction fluently.
— Donald Knuth, “Structured Programming with go to Statements”

We’ve been looking at Edsger Dijkstra’s principles of structured programming for some time now. Today we’ll conclude that phase of this series with a look at Donald Knuth’s “Structured Programming with go to Statements” (1974). Since “Go To Statement Considered Harmful” was the most famous declaration of the structured programming ethos, Knuth’s title itself was a kind of provocative joke — like naming a paper “Judaism with Idols” or “Vegetarianism with Pork Chops.” And that combination of playfulness and attentive humor extends throughout the article, from the epigraphic references to Bob Dylan (and a laxative advertisement!) on. In the middle of a complex analysis of an algorithm in ALGOL, in which Knuth is reviewing ideas for new ways to notate a particular kind of loop, he interjects:

Readers who remember 1967 will also appreciate [this] second suggestion,
turn on begin S; when B drop out; T; end.
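
In C-family notation, the shape Knuth is notating here — a loop whose exit test sits in the middle rather than at the top or bottom — survives as the for (;;) ... break idiom. Here is a small self-contained sketch of the idea (my example, not Knuth’s):

    #include <stdio.h>

    /* Knuth's "exit in the middle" loop: do S, test B, then do T.
       Here: sum numbers from input until a negative sentinel. */
    int main(void)
    {
        int x, sum = 0;
        for (;;) {
            if (scanf("%d", &x) != 1) break;  /* S: read the next value */
            if (x < 0) break;                 /* B: "when B drop out"   */
            sum += x;                         /* T: process the value   */
        }
        printf("sum = %d\n", sum);
        return 0;
    }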

The levity’s a good thing, because, I confess, as a non-mathematician and only a rudimentary programmer I approached Knuth’s formula-laden text with trepidation. And there remain parts of “Structured Programming with go to Statements” that are beyond me. But for the most part, this text bears out Knuth’s reputation for brilliant clarity — it’s surprisingly accessible.

Knuth walks us through the history of the “go to” controversy and explores cases where eliminating “go to” doesn’t really work — particularly in efforts to optimize the efficiency of little loops of code that a program must frequently repeat. A short way into the discussion, Knuth admits his own “vested interest in go to statements”: They are a basic building block of the assembler code he uses for examples throughout his famous series of books on The Art of Computer Programming. Still, he’s fundamentally sympathetic to Dijkstra’s argument about structured programming; he just insists that the “go to” issue is a symptom, not the heart of the problem:

There has been far too much emphasis on go to elimination instead of on the really important issues; people have a natural tendency to set up an easily understood quantitative goal like the abolition of jumps, instead of working directly for a qualitative goal like good program structure…. Probably the worst mistake anyone can make with respect to the subject of go to statements is to assume that “structured programming” is achieved by writing programs as we always have and then eliminating the go to’s. Most go to’s shouldn’t be there in the first place! What we really want is to conceive of our program in such a way that we rarely even think about go to statements, because the real need for them hardly ever arises. The language in which we express our ideas has a strong influence on our thought processes. Therefore, Dijkstra asks for more new language features — structures which encourage clear thinking — in order to avoid the go to’s temptations toward complications.

I appreciated Knuth’s argument, but even more, his attention to the expressive power and comprehensibility of different ways of writing programs (“The word do as used in ALGOL has never sounded quite right to native speakers of English; it has always been rather quaint for us to say ‘do read (A[i])’ or ‘do begin‘!”). This sensitivity leads Knuth, in the latter part of the paper, towards the principle he would later expound in his advocacy of “literate programming”: “A programmer should create a program P which is readily understood and well-documented, and then he should optimize it into a program Q which is very efficient.” A go to at the second, optimized stage would be acceptable, he says, because the programmer has already expressed the program in terms that make its structure clear and allow for the removal of errors and bugs.
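
For a flavor of the cases Knuth examines, here is a C rendering in the spirit of the paper’s opening example: a table search that appends the key when it is absent. The goto jumps straight out on a hit, and a miss falls through into the insertion with no flag variable to re-test afterwards (my sketch, not Knuth’s ALGOL):

    /* Search a[0..*m-1] for x, appending x if absent.
       Returns the index of x either way. */
    int find_or_insert(int a[], int *m, int x)
    {
        int i;
        for (i = 0; i < *m; i++)
            if (a[i] == x)
                goto found;    /* hit: skip the insertion */
        a[(*m)++] = x;         /* miss: append x to the table */
    found:
        return i;
    }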

I don’t know how widely Knuth is read today (or ever was); his attention to assembler-level code may feel distant from the concerns of the everyday developer focused on getting some mundane job done. But it seems to me that the habits of thought you can observe at work in “Structured Programming with go to Statements” — patience, precision, a willingness to examine principles from all angles and an openness to being amused — are of interest and value to anyone, in any field.

BONUS LINK: In this anecdote from Andy Hertzfeld’s Macintosh Folklore site, Knuth calls Steve Jobs on his claim to have read “all” Knuth’s books.

Filed Under: Code Reads, Dreaming in Code, Software

Comments

  1. Amos Anan

    November 17, 2006 at 3:04 pm

    Reading this post I couldn’t help but get the sense that the key theme was the significance of “microscopic to macroscopic,” which you rightly lead (lede??) with but then don’t really examine in depth.

    I’ve described programming to people who have never done it and think it’s only for nerds or eggheads or very deep thinkers (when they’re trying to be kind) as comparable to architecture, whether it be building homes or skyscrapers. I’ve also described programming to programmers as comparable to knitting. You may make beautiful sweaters with incredibly creative designs and patterns, but after the hundredth sweater it’s hard to think of the next one without considering the thousands of knit 1, purl 2’s involved. Micro versus macro applies on multiple levels of micro and macro.

    Considering the time frame of the early to mid seventies, Knuth’s description of an advantage of the use of gotos was well understood. In fact, the C language, created in the early ’70s, had already dealt with the problem you’ve given as an example. With nested loops and conditions calling for exiting those loops from deep within the nesting, gotos are more efficient and even more elegant than multiple test flags and testing statements. So C incorporated the “break” statement to handle that problem. Redefining “goto,” where its use is valuable, as something more restricted and directed to the problem at hand was the answer. An answer given years before.
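
    A minimal sketch of the tradeoff (my own hypothetical example): break in C exits only the innermost loop, so with deep nesting the choice is between a test flag and a direct goto.

        /* Find x in a 2-D table: flag version vs. goto version. */
        int find_flag(int a[4][4], int x, int *r, int *c)
        {
            int found = 0;
            for (int i = 0; i < 4 && !found; i++)
                for (int j = 0; j < 4; j++)
                    if (a[i][j] == x) {
                        *r = i; *c = j; found = 1;
                        break;            /* exits the inner loop only */
                    }
            return found;
        }

        int find_goto(int a[4][4], int x, int *r, int *c)
        {
            for (int i = 0; i < 4; i++)
                for (int j = 0; j < 4; j++)
                    if (a[i][j] == x) {
                        *r = i; *c = j;
                        goto found;       /* exits both loops at once */
                    }
            return 0;
        found:
            return 1;
        }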

    Expertise in handling macro and micro also comes with experience, in programming and in all areas where knowledge gained is knowledge valued. Apprenticeship, I think it was once called. I once taught Pascal 100, or whatever the beginner’s Pascal course is called. I also taught a C course for students who had already had some programming experience. I was more of a free-form teacher, usually going into a lecture with a loose outline for the day (which I’d put on the blackboard). From there I’d let the class and lecture go wherever the interaction took it. One day in a C class I did the bubble sort. I had the class provide much of the input, trying to get from them the sense of constructing an algorithm rather than looking one up. Not surprisingly, given that the base language of the school at that time was Pascal, the C sort looked like a Pascal routine. That is, it could only be used on one particular type of array. At the end of the class, when I was done, I looked at the resulting program and said, “Damn! That’s a Pascal sort. It has all the weaknesses and restrictions of Pascal and none of the strengths of C. Next class we’ll write a true C sort that’ll sort anything of any size.” Original Pascal was a teaching language. It was micro, as was its intent. C was a professional language. It was macro, though it could be micro.
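
    By a “true C sort” I mean the qsort(3) idiom: a base pointer, an element count and size, and a comparison callback, so one routine sorts anything. A sketch of the idea, not the classroom code:

        #include <stddef.h>

        /* Type-generic bubble sort: sorts n elements of any size,
           ordered by the caller-supplied comparison function. */
        static void swap_bytes(char *a, char *b, size_t size)
        {
            for (size_t k = 0; k < size; k++) {
                char t = a[k]; a[k] = b[k]; b[k] = t;
            }
        }

        void bubble_sort(void *base, size_t n, size_t size,
                         int (*cmp)(const void *, const void *))
        {
            char *p = base;
            for (size_t i = 0; i + 1 < n; i++)
                for (size_t j = 0; j + 1 < n - i; j++)
                    if (cmp(p + j * size, p + (j + 1) * size) > 0)
                        swap_bytes(p + j * size, p + (j + 1) * size, size);
        }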

    But if programming is like knitting a sweater or building a home or a skyscraper, what are the differences in those levels? Micro and macro. But on many levels within, there are micro and macro considerations. I was never a big fan of top-down programming. I often found that the top-down method was really just an outline that was easily understood inherently, and even if that outline was made more detailed, it still couldn’t get at the core problems that would be encountered only when directly approached. This I see as the interface between the micro and macro of a problem and program. It was the same for bottom-up. One built the tools to handle a problem, and then one integrated them into a larger program to deal in a coordinated manner with the multiple needs the tools were built for. That integration is again the micro-macro interface.

    You describe Knuth’s use of generic assembly language, but assembly language itself has evolved and has acquired more high-level language structures. And in cases where those structures aren’t at a high enough level, macros can be used. The original Lotus 1-2-3 was written in assembly language, but apparently a heavily macro-based version. This might be thought of as a custom high-level language, or as an assembly language with many pre-built tools to handle the tasks involved.

    I also taught 80x86 assembly language. One of the things I saved for late in the course was recursion. It was ironic for me in that when I first took programming, there was a part of the course that involved simulating recursion (Augenstein and Tenenbaum’s Data Structures text – the PL/I version in those days).

    I don’t think I ever got a good understanding of simulating recursion, probably because any important “forests” were lost in the “tree” tools needed to handle the problem. The stack that had to be created, and the program methods needed to use that stack and handle the simulated code, were not trivial for a fairly new programmer.

    Ironically, low-level assembly had a stack structure built in, and so in teaching the use of that built-in structure it seemed easier to implement recursion, to understand that implementation, and at the same time to gain some insight into the power and flexibility of assembly language. The stack structure within a chip architecture, I believe, was an early Intel advancement.
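
    For anyone who hasn’t seen it, “simulating recursion” just means managing the stack yourself. A toy sketch in C (mine, not the textbook’s PL/I exercise):

        /* n! two ways: the compiler's call stack vs. an explicit one. */
        static unsigned long fact_recursive(unsigned n)
        {
            return n < 2 ? 1 : n * fact_recursive(n - 1);
        }

        static unsigned long fact_simulated(unsigned n)
        {
            unsigned stack[64];
            int top = -1;
            while (n > 1)
                stack[++top] = n--;     /* "push" each pending frame  */
            unsigned long result = 1;
            while (top >= 0)
                result *= stack[top--]; /* "pop" and finish each frame */
            return result;
        }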

    An aside: I once converted the C standard library to assembly language (not that difficult, in that C has assembly language output as an option). I then wrote macros to implement scanf and printf (the fundamental I/O methods in C). It was an exercise after writing an equivalent to the mainframe assembler’s “Printout *” macro.

    I think all of these things I’ve mentioned involve the interface between micro and macro and the complexity involved. It doesn’t break down into a single problem, though it may involve a single method of approach to the micro-macro interface problem at hand. The gotos that Knuth doesn’t like seem to be the ones that traverse the micro-macro interface – the ones that go from one micro to another, leaving the macro involved linked by spaghetti.

    …

    The other day I saw a link at Digg.com to visual demonstrations of sorting algorithms. These are excellent views of the macro of sorting where the algorithms themselves often are obscured by their micro nature.

    Visual demonstration of various sorting algorithms:
    http://www.cs.ubc.ca/spider/harrison/Java/sorting-demo.html

    And:
    http://www.cs.rit.edu/~atk/Java/Sorting/sorting.html

    I remember years ago writing a program that printed out the swapping of rings for the Towers of Hanoi problem. This was on a mainframe line printer, with each page showing a swapped ring. A sort of flip-card animation. :P
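
    The recursion behind that flip-card animation is tiny, which was the charm of it. Something like this (in C, for illustration):

        #include <stdio.h>

        /* Towers of Hanoi: print one line per ring moved. */
        static void hanoi(int n, char from, char to, char spare)
        {
            if (n == 0) return;
            hanoi(n - 1, from, spare, to);
            printf("move ring %d: %c -> %c\n", n, from, to);
            hanoi(n - 1, spare, to, from);
        }

        int main(void)
        {
            hanoi(3, 'A', 'C', 'B');   /* 7 moves for 3 rings */
            return 0;
        }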

    Micro versus macro. Giving people a clear understanding of the intricacies of programming and the skyscrapers they can create – with escalators and high speed elevators.

  2. David Carlton

    November 17, 2006 at 10:04 pm

    I read this (along with other articles collected in his Literate Programming volume) a bit over a year ago; it was fascinating. Really interesting, well thought out, well written, but I could only dimly imagine the context he was coming from. There seemed to be strange restrictions at play that I could only guess at – otherwise, the arguments made very little sense to me. So my main reaction was “I hope programming languages change for the better as much over the next thirty years as they have over the last thirty”.

    But, despite that, there were some wonderful hints of the world to come. My favorite was the possibility of automated refactoring tools, which took a couple of decades to appear, and which still aren’t as widely available as I’d like.

  3. Brian Slesinsky

    November 18, 2006 at 12:48 am

    It was pretty obvious why you’d picked the earlier articles in this series, but I wasn’t so sure about this one. It’s a well-written account of what it was like during the transition to structured programming, but at first it seemed to be concerned with things we’ve largely moved beyond.

    One claim to fame is that this seems to be the paper where Knuth famously repeated that “premature optimization is the root of all evil.” His criticism of programmers who over-optimize at the expense of readability makes as much sense today as then, as does his warning that one shouldn’t guess at what part of the program needs to be optimized.

    But ironically, much of the paper is about techniques that we now consider premature optimization. Improvements to compilers have made many of those optimizations obsolete. The tradeoff is that they also make it very difficult to reason with much accuracy about what effect a change in a program will have on its performance. Instead, we write benchmarks and run experiments on real machines.
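
    The modern reflex, in skeleton form (a crude sketch; careful benchmarking takes much more than this):

        #include <stdio.h>
        #include <time.h>

        /* Time a workload instead of guessing about it. */
        int main(void)
        {
            volatile unsigned long sum = 0;  /* keep the loop from being optimized away */
            clock_t t0 = clock();
            for (unsigned long i = 0; i < 100000000UL; i++)
                sum += i;
            clock_t t1 = clock();
            printf("%.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
            return 0;
        }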

  4. Jim Jinkins

    November 19, 2006 at 5:21 am

    I enjoyed Knuth’s explorations of language features that would help in writing clear and efficient code by taming the generalized goto statement. They are still worth reading today, even after language designers seem to have converged on break and continue statements for controlling loops.

    I wish he had added a section titled “PL/I exceptions considered harmful.” It might have shortened the time until new languages tamed the generalized exception statement.

  5. Cleo Saulnier

    November 22, 2006 at 12:29 pm

    I like hardware and assembler programming and I thought Knuth’s paper was tedious to get through. I read it a while back and from what I remember, it was a constant battle between high level construction and low level optimizations. If there was any way to describe it, I’d apply Joel Spolsky’s leaky abstraction where low level details seep up to the top.

    I think Knuth was trying to say that there shouldn’t be any reason why you can’t use high level constructs, but it shouldn’t be at the cost of simple optimizations. Note that this goes beyond what compilers can provide. But current languages are all or nothing. You can’t build software and then use the fastest implementations. Current software has both design and implementation together and so we are constantly fighting two extremes. This is the wrong way and Knuth’s paper is just another example in the long list of items wrong with software development. (Note that “interfaces” do not solve the implementation vs. design problem. It just makes it worse.)

  6. saba

    December 16, 2006 at 11:20 pm

    I want a project in the Pascal language about matrices.

    Please guide me …

    Many thanks.

    saba

  7. Joel Neely

    December 19, 2006 at 10:18 am

    Knuth’s paper still has some important lessons, the most important of which is to avoid swinging between extremes and holding on to older ideas, two themes that characterize much of the history of computing (and science in general). I’ve seen both Paul Dirac and Max Planck credited with the statement that quantum mechanics would become the standard model in physics only when all of the old physicists had died. In one sense we’ve learned much since 1974, and in another sense we’ve learned very little.

    In the earliest days of the field, computers were expensive, rare, slow, and limited in capacity. Much effort was spent in micro-optimizing to squeeze the maximum benefit out of the beast. Unfortunately, since the next generation of programmers was taught by the first, that obsession continued even after some of the most severe constraints began to ease. I’m referring here to the mentality that almost instinctively prefers to write the equivalent of “a = (b << 2) + b” instead of “a = b * 5” because “everybody knows that shift-and-add is faster than multiplication”. Bottom-up thinking was the order of the day.

    Dijkstra, Hoare, and others encouraged us to think first at a high level about what needed to be accomplished and only afterwards concern ourselves with implementation. Some practitioners dismissed their advice as impractical, while the snake-oil salesmen trivialized their vision into catch-phrases such as “don’t use go-to statements”.

    Knuth argued eloquently and forcefully that the middle road was more appropriate for the practitioner than pushing to either extreme, but he also argued clearly for applying the right level of hack-the-code-for-speed depending on context. Although the development pattern of “build it, profile it, and only then tweak the critical hot spots” has been on the table now for over thirty years, it still isn’t treated as the standard model!

    On the other hand, probably the best measure of how far we’ve come is to take the code fragments Knuth discusses and rewrite them for oneself in a contemporary language. His anecdote on challenging Val Schorre to write an eight-queens program without go-to statements is fascinating, considering that a good first-year computing science student would be expected to do that as a simple homework assignment these days.
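
    For comparison, the goto-free homework version now fits in about twenty lines (my sketch in C):

        #include <stdio.h>
        #include <stdlib.h>

        /* Eight queens by plain recursive backtracking, no goto.
           col[r] is the column of the queen placed on row r. */
        static int col[8], solutions;

        static int safe(int r, int c)
        {
            for (int i = 0; i < r; i++)
                if (col[i] == c || abs(col[i] - c) == r - i)
                    return 0;
            return 1;
        }

        static void place(int r)
        {
            if (r == 8) { solutions++; return; }
            for (int c = 0; c < 8; c++)
                if (safe(r, c)) { col[r] = c; place(r + 1); }
        }

        int main(void)
        {
            place(0);
            printf("%d solutions\n", solutions);   /* 92 */
            return 0;
        }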

    As a side note, in the early 70s I worked on a system (Burroughs B-1700 series) which was years ahead of its time. We would now refer to that system as a 24-bit RISC processor, with multiple virtual machines as part of the system software (one for each programming language: COBOL, FORTRAN, etc.) The operating system was written in an Algol-like high-level language called SDL. That language’s virtual machine had some very interesting ideas which relate to our discussion of Knuth’s paper.

    One of these was to recognize that in-line blocks could be transformed into anonymous, no-argument procedures. (Since these procedures were in the same lexical scope as the surrounding code, entry and exit required only a simple stack-based manipulation of the program counter, not the full set of mechanisms required for functions with arguments and returned values.)

    For example (using C/Java-like notation for familiarity):

    if (conditional_expression) {
        sequence_of_statements_for_true
    } else {
        sequence_of_statements_for_false
    }

    contains two blocks that would be compiled into these out-of-line lightweight procedures. The if-then-else construct itself was compiled into code that would evaluate the conditional_expression (leaving a boolean on top of the virtual machine stack), followed by a two-argument op-code whose arguments were simply the references to those procedures.

    “Falling through” such a block simply returned to the next instruction past the point of invocation, in similar fashion to falling out of a void function/method in C/Java.

    An interesting consequence of this approach was that the equivalent of the C/Java “break” statement was understood simply as exiting an anonymous procedure! This could be used in any context where one had written a code block: loops, conditionals, or even in-line blocks.
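
    One rough way to picture the scheme in C-like terms (purely illustrative; the real machine used op-codes and stack manipulation of the program counter, not function pointers):

        #include <stdio.h>
        #include <stdbool.h>

        /* Each block becomes an anonymous no-argument procedure; the
           if-then-else op-code takes the condition plus references to
           the two block-procedures. "break" is just an early return
           from such a block-procedure. */
        static void when_true(void)  { puts("true branch taken"); }
        static void when_false(void) { puts("false branch taken"); }

        static void if_else_op(bool cond, void (*t)(void), void (*f)(void))
        {
            (cond ? t : f)();   /* "falling through" = returning here */
        }

        int main(void)
        {
            if_else_op(2 + 2 == 4, when_true, when_false);
            return 0;
        }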

    I still recall the mental shift of thinking of a block as a mechanism intended to accomplish a result, and exiting that block as acknowledging that the result had been achieved (rather than simply as jumping around in the source code). I remain impressed by the extent to which our chosen metaphors both encourage some lines of thought and inhibit others, and by how much we need people such as Dijkstra, Hoare, and Knuth who challenge us to find new metaphors and consider what that shift in thinking might allow us to do.

Trackbacks

  1. Quora says:
    September 27, 2011 at 5:40 pm

    Are there useful control structures not captured by the mainstays of structured programming?…

    The examples which come immediately to mind are coroutines and continuations*, but one could also look at generators, backtracking (as in the Icon language†), and perhaps Professor Donald Knuth’s paper, “Structured programming with go to statements”…
