As can be seen, even in the 1960s some thought had been given to how one would establish reliability: essentially by reference to the length of the period during which a computer system had been functioning normally and properly, plus indications of any circumstances in which it had not. The certification provision for civil proceedings was dropped by the Civil Evidence Act 1995, s 15 (pointing to a schedule of repeals), and for criminal proceedings by the Youth Justice and Criminal Evidence Act 1999, s 60. After these repeals the issue of computer evidence became largely one of weight as opposed to admissibility.
But this is a matter for judicial discretion. It can give defence counsel two bites at the cherry: to seek to have evidence excluded for unfairness and, if that fails, to argue in front of a jury that the evidence is unreliable. The comments made by the experts consulted by Stephen Mason take these admissibility tests a little further, no doubt because they reflect more recent experience as opposed to what was thought in the 1960s.
The salient conclusion, with which I concur, seems to be that the percentage degree to which a system can be said to be free from bugs (if such a figure can actually be determined for a complex system with a long history of development and modification) does not indicate whether important bugs may still linger and prove significant in particular circumstances. The current position seems to be that evidence from a computer is routinely admitted, and that there is a rebuttable presumption of reliability.
In other words, a court will accept the output of a computer without looking for specific reassurance unless it is challenged, at which point it is up to the party producing that output to persuade the court that, in the particular circumstances, the evidence should be given weight. Stephen Mason points here to the Criminal Justice Act 2003. Computer technology is in a state of constant evolution, so one of the problems faced by the designers of security standards is that it can be possible to achieve a certificate of compliance with a standard such as the ISO 27000 series even though the system itself is vulnerable, because the checklist used has become obsolete.
We could also look to some of the questionnaires generated to assist those seeking compliance with the GDPR, who need to show they have been through the necessary processes. In the end, the only test will be specific investigation of an alleged flaw which is material to the particular circumstances that are the subject of a litigation process.
Analysing those circumstances is not a matter of generalised tests of robustness or reliability but a function of the adequacy of disclosure. In English law the detail is given in the Civil Procedure Rules.
Note: I'm still not going to discuss the definitions of what a specification is in this instalment. What a tease! A perennial debate within and without the software community is whether software development is an engineering discipline, and, if not, why not.
Well, despite plentiful (mis)use of the term 'software engineer' in my past, I'm increasingly moving over to the camp of those whose opinion is that it is not an engineering discipline. To illustrate why, I'm going to draw from three of my favourite branches of science: Newtonian physics, chaos theory, and quantum physics, with a modicum of logic thrown in for good measure.
As my career has progressed, both as practitioner (programmer, consultant) and as imperfect philosopher (author, columnist), the issues around software quality have grown in importance to me. The one that confounds and drives me more than all others is what I believe to be the central dichotomy of software system behaviour:

The Unpredictable Exactitude Paradox: Software entities are exact and act precisely as they are programmed to do, yet the behaviour of non-trivial computer systems cannot be precisely understood, predicted, nor relied upon to refrain from exhibiting deleterious behaviour.
Note that I say programmed to do, not designed to do, because a design and its reification into program form are often, perhaps mostly, perhaps always, divergent.
Hence the purpose of this column, and, to a large extent, the purposes of our careers. The issue of the commonly defective transcription of requirements to design to code will have to wait for another time. Consider the behaviour of the computer on which I'm writing this epistle.
Assuming perfect hardware, it's still the case that the sequence of actions (involving processor, memory, disk, network) carried out on this machine during the time I've written this paragraph has never been performed before, and that it is impossible to rely on the consequences of those actions.
And that is despite the fact that the software is doing exactly what it's been programmed to do. Essentially, this is because software operates on the certain, strict interpretation of binary states, and there are no natural attenuating mechanisms in software at the scale of these states.
If one iron atom in a bridge is replaced by, say, a molybdenum atom, the bridge will not collapse, nor exhibit any measurable difference in its ability to be a bridge. Conversely, an errant bit in a given process may have no effect whatsoever, may manifest benignly (e.g. a single mis-rendered pixel in one frame), or may be catastrophic.
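To make the absence of attenuation concrete, here is a small illustration (mine, not the column's): the very same single-bit error is negligible or enormous depending solely on where it lands.

```c
#include <stdio.h>

int main(void)
{
    unsigned value = 1000;

    /* Flip the least significant bit: off by one, quite possibly benign. */
    printf("%u\n", value ^ (1u << 0));  /* prints 1001 */

    /* Flip the most significant bit: a wildly different quantity. */
    printf("%u\n", value ^ (1u << 31)); /* prints 2147484648 */

    return 0;
}
```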
We, as software developers, need language to support our reasoning and communication about software, and it must address this paradox; otherwise we'll be stuck in fruitless exchanges, often between programmers and non-programmers (clients, users, project managers), each of whom, I believe, tends to think and see the software world at a different scale.
I will continue the established use of the term correctness to represent exactitude. And I will, influenced somewhat by Meyer and McConnell, use the terms robustness and reliability in addressing the inexact, unpredictable, real behaviour of software entities.
Let's look at some code. Remember the first of the Bet-Your-Life? test cases from the last instalment [QM-1]: the invert function.
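The listing itself has been lost from this copy; what follows is a minimal reconstruction, with the exact body assumed from the surrounding discussion (logical inversion of an int, via comparison against zero):

```c
/* Sketch (reconstruction): yields 1 if value is zero, 0 otherwise. */
int invert(int value)
{
    return value == 0;
}
```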
In fact, it'd be pretty hard to write any implementation other than this. Certainly there are plenty of (possibly apocryphal) screeds of long-winded alternative implementations to be found on the web. With languages that have a bona-fide Boolean type, such as Java and C#, the value need not be compared against 0, and the result may well be computed as equality with true or false. In C and C++, by contrast, comparison against zero is necessary, even given their built-in bool types!
In either case, it's almost impossible to implement this function incorrectly. If we permit ourselves the luxury of assuming a correctly functioning execution environment, then without recourse to any automated techniques, or even to a detailed written specification, we may reasonably assert the correctness of this function by visual inspection.
Now consider the definition of strcmp, the second Bet-Your-Life? test case. Even with a function as logically straightforward as strcmp there are different ways to implement it. What was interesting to me was that I had forgotten the nuances resolved in previous implementations, and I initially wrote the last line as a plain subtraction of the two current characters, as in the sketch below.
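Neither of the original listings survives in this copy; as a stand-in, here is a minimal sketch of a conventional pointer-walking implementation (the name is mine), whose final line is the plain subtraction in question:

```c
/* Sketch (reconstruction, not the original listing): a conventional
   strcmp-alike whose last line subtracts through plain char. */
int naive_strcmp(char const* s1, char const* s2)
{
    for (; *s1 == *s2; ++s1, ++s2)
    {
        if ('\0' == *s1)
        {
            return 0; /* strings are equal */
        }
    }
    return *s1 - *s2; /* the suspect last line */
}
```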
Then (being as how I'm writing a column about software quality, and my Spidey-sense is set to max) I stopped and wondered how this would work between compilation contexts where char is signed or unsigned. Clearly, for certain ranges of values, the result of that subtraction will differ between the two. I immediately set about writing a test program, building for both signed and unsigned modes, and saw the suspected difference in behaviour for strings containing bytes in the range 0x80 to 0xff.
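The test program itself is not reproduced here either; the following is a minimal sketch of the experiment (restating naive_strcmp from above so that it stands alone). Built with, say, GCC's -fsigned-char and then -funsigned-char, the sign of the first line of output flips:

```c
#include <stdio.h>
#include <string.h>

/* naive_strcmp restated from the sketch above: comparison happens
   through plain char, whose signedness is implementation-defined. */
static int naive_strcmp(char const* s1, char const* s2)
{
    for (; *s1 == *s2; ++s1, ++s2)
    {
        if ('\0' == *s1)
        {
            return 0;
        }
    }
    return *s1 - *s2;
}

int main(void)
{
    char const a[] = { (char)0x80, '\0' }; /* a byte in the 0x80-0xff range */
    char const b[] = "A";

    /* Negative when plain char is signed; positive when it is unsigned. */
    printf("naive:  %d\n", naive_strcmp(a, b));

    /* The library strcmp must compare as unsigned char: positive here. */
    printf("strcmp: %d\n", strcmp(a, b));

    return 0;
}
```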
I've been programming C for, ulp!, a long time now; perhaps I once knew this and had forgotten it, or perhaps I never knew it at all. Either way, it's quite sobering. What this illustrates all too well is that software is exact, humans operate on assumption and expectation, and the two are not good bedfellows. When I spoke to my good friend and regular reviewer, Garth Lancaster, about this, he too was ignorant of the unsigned comparison aspect of strcmp.
Here's an implementation I knocked up during the preparation of this instalment, without recourse to any I've written in the past or to various open-source and commercial implementations.
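That listing is also lost in this copy; the reconstruction below is a sketch only, assuming the conventional shape (the name my_strcmp is mine), with the comparison done through unsigned char, which is what the C standard requires for strcmp's ordering:

```c
/* Sketch of a strcmp-alike: characters are compared as unsigned char,
   so behaviour does not vary with the signedness of plain char. */
int my_strcmp(char const* s1, char const* s2)
{
    unsigned char const* u1 = (unsigned char const*)s1;
    unsigned char const* u2 = (unsigned char const*)s2;

    for (; *u1 == *u2; ++u1, ++u2)
    {
        if ('\0' == *u1)
        {
            return 0; /* equal, including both empty */
        }
    }
    return (*u1 < *u2) ? -1 : +1;
}
```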
Unlike invert, strcmp must reach through its arguments into memory elsewhere in the program; this additional reliance on external factors is a significant part of the increased complexity over invert. Specifically, it is possible for strcmp to be passed invalid arguments (null pointers, or pointers to memory that does not contain a NUL-terminated string) as a result of a defect elsewhere within the program, whereas all 'physically' possible arguments to invert are valid. The possibility of such invalid arguments makes reasoning about the correctness of strcmp, and of software entities built in terms of it, more complicated than is the case for invert.
The next Bet-Your-Life? test case comes from the b64 library. I'm not going to show the full implementation, for brevity's sake; if you're interested you can download the library [B64] and see for yourself. Like strcmp and invert, the b64 library has no dependencies on any other software libraries, not even on the C runtime library (except when contracts are being enforced).
This permits a substantial level of confidence in behaviour, because only the b64 software entities themselves are involved in such considerations. Broadly speaking, it means that behaviour, once 'established', can be relied on regardless of other activities in the execution environment.
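The flavour of such a dependency-free component can be sketched; this is my own illustration, not b64's code, and encode_groups is a hypothetical name. Note that it needs only a freestanding header for its types and calls nothing at all at run time:

```c
#include <stddef.h> /* freestanding header: types only, no library code */

static char const ALPHABET[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

/* Base-64 encodes a whole number of 3-byte groups (tail handling and
   padding omitted for brevity); returns the number of characters written. */
size_t encode_groups(unsigned char const* src, size_t n, char* dest)
{
    size_t w = 0;
    size_t i;

    for (i = 0; i + 3 <= n; i += 3)
    {
        /* Pack three bytes into 24 bits, then emit four 6-bit indices. */
        unsigned v = (unsigned)src[i] << 16 | (unsigned)src[i + 1] << 8 | src[i + 2];

        dest[w++] = ALPHABET[(v >> 18) & 0x3f];
        dest[w++] = ALPHABET[(v >> 12) & 0x3f];
        dest[w++] = ALPHABET[(v >>  6) & 0x3f];
        dest[w++] = ALPHABET[ v        & 0x3f];
    }
    return w;
}
```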
At the same time, b64's encoding and decoding logic is considerably more complex than that of invert or strcmp, and I think it is impossible in a library such as this to stipulate its correctness based on visual inspection of the code; anyone who would do so would rightly be seen as reckless at best. Thus we can see that increasing complexity acts strongly against human-assessed correctness.
But there's more to this than correctness. Let's now consider the final member of the Bet-Your-Life? test cases: the recls library. Clearly recls (or at least the part of it under discussion) has substantial behavioural complexity. That alone makes it, in my opinion, impossible for any reasonable developer to stipulate its correctness.
But that's only part of it. Of greater significance is that recls is implemented in terms of a great many other software entities, including library components from STLSoft and operating system facilities (e.g. the file-system enumeration APIs).
And even that is not the major issue. The predominant concern is that recls interacts with the file-system, whose structure and contents can and do change independently of the current process.
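A sketch of the underlying problem, using nothing but the standard library (this is not recls' API, and the path is hypothetical): any answer obtained from the file-system may be stale by the time it is used.

```c
#include <stdio.h>

int main(void)
{
    char const* path = "example.log"; /* hypothetical file */

    /* Even if an enumeration reported this file a moment ago, another
       process may have deleted or renamed it in the interim... */
    FILE* f = fopen(path, "r");

    if (NULL == f)
    {
        fprintf(stderr, "'%s' has gone away\n", path);
        return 1;
    }

    /* ...and even on success, its contents may no longer match what
       the earlier enumeration observed. */
    fclose(f);

    return 0;
}
```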