[ FILES: Consulting | Applications | Languages | Code Generation | Year 2000 ]

Year 2000, The Millennium Bug,
and Large Code Body Re-Engineering


According to Computer RISKS Digest, reporting on studies by Morgan Stanley and The Gartner Group, Year 2000 repairs cost an estimated $1.50 per line of code involved. National estimates for all projects range from $300 billion to $600 billion, and they keep creeping up.

Instead of spending a dollar fifty inspecting each line of code, why not cut to the relevant code as quickly as possible? Wouldn't that dramatically reduce the time, effort, and cost?

There are tools that do that already, many of which use something like GREP to produce reports that you can study and debate, and run over and over as you chase variable names. That can take time!

We do it faster!

We let you find relevant code in seconds, and let you interactively walk the subroutine calling chains, as well as the assignment chains, up and down, without executing the code in cumbersome debuggers.

We let you follow the use of variables, subroutines, and functions across multiple source files, even multiple programs, if you like.

We can also link in design documents, notes, bug lists, and other materials to produce a complete knowledge system that anyone can use.

And, we provide a way of managing that code from then on.

Let's back up a second. It is said that half of solving a problem is defining it before you leap in trying to solve it.

The Problem:

How can we deal with the Year 2000 problem, and other problems related to tracking and repairing large bodies of code, without becoming bogged down searching for variables and subroutines in a desktop full of paper?

Re-engineering legacy code presents three fundamental problems:

Finding Things

You have a hundred subroutines and perhaps a thousand variables. Which is used where? How do you find it? How do you find where else it is used? Where else it is set? Do you manually try to chase down all declarations in a dozen printouts? Do you use paper clips on the edges of the printouts, as in the days of yore, as you GREP and inspect the code?

What if it isn't a variable, but an error message you saw on the screen as you ran that program? How hard is it for you to take a piece of text from a message, trace it to a prompt, and walk the calling chain up and down to see what happens before, and after, that message is presented? Minutes? Hours? Or days?

What would it mean to you if the answer was seconds to get to each called routine?
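We can't show our proprietary engine here, but to make the idea concrete, here is a minimal Python sketch of the kind of cross-reference index such a tool builds once, up front, so that every later lookup of a variable, subroutine, or message fragment is instant. All file names and identifiers below are made up for illustration.

```python
import re
from collections import defaultdict

IDENT = re.compile(r"[A-Za-z_][A-Za-z0-9_]*")

def build_xref(sources):
    """sources: dict of {filename: source text}.
    Returns {identifier: [(filename, line number, line text), ...]},
    built in a single pass over all the files."""
    xref = defaultdict(list)
    for fname, text in sources.items():
        for lineno, line in enumerate(text.splitlines(), start=1):
            # Index each identifier once per line it appears on.
            for tok in set(IDENT.findall(line)):
                xref[tok].append((fname, lineno, line.strip()))
    return xref

# Two tiny pretend "source files" sharing a suspicious date variable.
sources = {
    "billing.c": "int exp_year;\nexp_year = yy + 1900;\n",
    "report.c":  'printf("%d", exp_year);\n',
}
xref = build_xref(sources)

# Every use of exp_year, across both files, in one dictionary lookup.
for fname, lineno, line in xref["exp_year"]:
    print(f"{fname}:{lineno}: {line}")
```

The point is not the thirty lines of Python; it is that after one indexing pass, "where else is it used?" stops being a fresh GREP over everything and becomes a lookup.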

Following Calling Chains

It is one thing to look at flowcharts, but once you turn to the code itself, once you have a suspicious routine named, how long does it take you to find and view the actual sources of the routines that routine calls? And what of examining the source code of the routines that called that routine? Do you thumb through printouts? Type in file names? Does this take minutes? Hours? Or days?

What would it mean to you if the answer was seconds to get to each use in each file? Without typing file names!
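The walking itself is simple once the right tables exist. As an illustration only, and not our proprietary method, here is a toy Python sketch that extracts caller and callee tables from C-like source, so that moving up or down the chain becomes a table lookup instead of a hunt through printouts. The parsing here is deliberately naive; the function names are invented.

```python
import re
from collections import defaultdict

# Matches a C-like function definition line, e.g. "void main_loop() {"
DEF_RE = re.compile(r"^\s*\w[\w\s\*]*\b(\w+)\s*\([^;]*\)\s*\{")
# Matches anything that looks like a call, e.g. "print_date("
CALL_RE = re.compile(r"\b(\w+)\s*\(")

def call_graph(text):
    """Return (calls, callers): who each routine calls, and who calls it."""
    calls, callers = defaultdict(set), defaultdict(set)
    current = None
    for line in text.splitlines():
        m = DEF_RE.match(line)
        if m:
            current = m.group(1)   # entering a new routine's body
            continue
        if current:
            for callee in CALL_RE.findall(line):
                calls[current].add(callee)
                callers[callee].add(current)
    return calls, callers

src = """\
void print_date() {
    format_date();
}

void main_loop() {
    print_date();
}
"""
calls, callers = call_graph(src)
print(sorted(calls["main_loop"]))     # down the chain: ['print_date']
print(sorted(callers["print_date"]))  # up the chain:   ['main_loop']
```

Down the chain and up the chain are both single lookups, which is exactly why the answer can be seconds rather than hours.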

Following Assignment Chains

How hard is it to find all uses of a variable? Yes, a simple GREP may do, but where did the data come from? How hard is it to follow the changing variable names back to the source of that data?

Isn't that the key question in solving the Year 2000 Bug? Where did the date come from?

What would it mean to you if this answer, too, were seconds per variable name, seconds per data assignment? Without typing variable names!
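To show what "following an assignment chain" means in the simplest possible terms, here is a toy backward trace in Python. It handles only straight renames (x = y) and every name in it is invented; real code needs expression analysis, scoping, and aliasing, which is precisely where a tool earns its keep.

```python
import re

# Matches only a simple rename assignment, e.g. "exp_year = yy;"
ASSIGN_RE = re.compile(r"^\s*(\w+)\s*=\s*(\w+)\s*;?\s*$")

def trace_back(var, lines):
    """Walk upward from the last line, following each rename of var
    until the chain reaches something that is not a simple rename."""
    chain = [var]
    for line in reversed(lines):
        m = ASSIGN_RE.match(line)
        if m and m.group(1) == chain[-1]:
            chain.append(m.group(2))
    return chain

code = [
    "raw_yy = read_field();",  # not a simple rename; the chain stops here
    "yy = raw_yy;",
    "exp_year = yy;",
]
print(" <- ".join(trace_back("exp_year", code)))
# prints: exp_year <- yy <- raw_yy
```

Three hops back, and you have found where the date came from, which is the question the Year 2000 repair actually turns on.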

How can we do this?

Come, e-mail us, and let us discuss your problems and how our proprietary solutions can help you.


Copyright (C) 1996, J Consultants
j@mall-net.com
Attention agents: This does not constitute permission to submit my qualifications for a position.
Submissions may only be made where specific permission has been granted in advance.