Oh by the way

December 30th, 2007

On Tuesday the 18th I received an invitation to a meeting on Wednesday the 19th to discuss the implementation of Oracle software to replace our current factory software. Some of the high-level people who were part of the A-Team had assigned various people in the plant to be on teams to document critical functions of the existing system so they could be adequately replicated on the Oracle system. This Oracle implementation was not one of the projects of the A-Team; they were just announcing it as a postscript to their exploits. Not all of the people they had chosen for these documentation teams were invited to the meeting, and many who were invited, like myself, had heard nothing about it until the invitation.

In response to some of my questions, the lead presenter said, “Oh, my speciality is order management. It will be crucial for you to document how due dates and promise dates are calculated and transferred between systems,” which is partly a function of order placement, presently conducted by a sister facility that is being shut down, and the planning department–in other words, neither I nor any other member of my team knows anything about it. So I asked to speak with him further. “Oh, I have been here for five weeks and now I am leaving.”

The first part of the documentation is due January 4th. That might seem like only somewhat short notice, if you think of the two days off for Christmas, but when you consider that Acme is so frantic to post a big fat revenue number that they would have us working Saturday, Sunday, and Monday (New Year’s Eve), if only we could convince trucking companies to pull freight all of those days, then you can see that our mandated priorities leave us little room to work on this project until the new year. And the first day of the new year is a holiday, since, being a day later, it no longer has any bearing on the financial numbers for 2007. So it is not a somewhat short time frame for this kind of project; it is very short.

And I figured out why. The actual IT experts will come later. This bit of documentation is only to make the users’ lives easier by having the system tailored to our needs. Also it is possible that we might offer some piece of insight that would help the IT team. But really, it is only us users who have anything to gain from this, so if we don’t have enough time to do a good job, who really cares?

The A team gets an F

December 30th, 2007

Lean Production, based on the Toyota Production System, recognizes 7 wastes:

  1. Transportation. Moving something from one spot to another does not make it worth more.
  2. Inventory. If a miser saves all of his money, what good does it do him? If a company has more product than someone is ready to pay for, what is it worth?
  3. Motion. Any kind of walking, reaching, moving, touching, searching, or other motion, which does not directly make the work in progress closer to what the customer will pay for, is a waste of time and money.
  4. Waiting. When a worker waits, he is being paid to do nothing. When a product waits, it is not returning money to the company.
  5. Overproduction. As long as you are making a perfect product, which will never need to be fixed, updated, or changed, and which somebody will always buy, you can make as much as you want. Otherwise you should only make as much as you can sell now, before something changes.
  6. Overprocessing. If you spend time and money to put a pretty design on a part that is going to be unseen inside the finished product, you have wasted your money. Anything you do that the customer does not want is a waste.
  7. Defects. If you inspect for defects and you do not find any, you have wasted time. If you find a defect, you will have to repair it or scrap the part entirely. If you have to rework a part you are paying twice for a part that the customer will only pay for once.

The A-Team came to Acme with a mandate from on high, and scared all the management into not questioning any of their decisions. They set about with steely resolve to drastically reduce the waste of Inventory, and this they did–mainly by increasing Transportation.

It's all in my head

December 22nd, 2007

Usually I try to avoid writing about the technical details of my work because it is basically talking to myself; I doubt most of you are very interested in the obscure quandaries of the trade. But I am going to indulge in those details today.

My database died. The error it was producing was no longer fixed by running a compact & repair. Microsoft Access databases have a tendency to develop permanent fatal errors, especially when they are undergoing heavy structural change. Deleting an entire table of data is a more drastic change than deleting all the records from the table; creating new queries is more drastic than running existing queries. A database is meant to continually accumulate more data, of course, but it is least stressful to do this by simply adding information into an existing table, or perhaps changing some existing data. Creating and deleting the places and rules for the data, and the ways it is manipulated, always risks loose ends, orphaned bits of data, gaps, and leftovers. As usual for Microsoft, Access is far easier for most people to use but a lot less resilient and powerful than industry-grade software.

Microsoft Word is great for writing letters, memos, and short documents. It’s bad for writing book-length manuscripts and completely incapable of producing publication-ready electronic document files. Microsoft Access is great for small-scale data management, like maybe a Mom-and-Pop floral shop, but not up to tracking the volume of data that describes the operation of a factory.

Realizing this, I tried to make my last version of my workhorse database complete and concerned with only a limited amount of the data available. There were a few things that I still relied on temporary tables to do, which meant some “high trauma” was an integral part of the database from the beginning. On top of that, I kept using the wealth of information in that database for new purposes, both one-time offbeat requests and new regular data reports. Constantly adding new pieces to the database, and usually not knowing in advance if I would need to reuse the pieces, left me with a mess of tables and queries, tied together in such overlapping ways that I did not know what was critically necessary and what was long since obsolete.

This is why you are supposed to carefully document the reasons, design, and function of your original database, and continue to document all changes. When any work you do on your database is considered superfluous or at best accessory to your job, though, there is too little time to spend explaining what you are trying to do.

When an Access database does fail, you can usually fix it by copying all the pieces into a new database. This can be a pain, but it is better than the whole thing becoming absolutely worthless. But with my ever-expanding chores for my database, I wanted to rebuild the database from the ground up to be more robust, able to support more demands.
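If I do end up salvaging by copying, at least it can be scripted rather than dragged over object by object. Something along these lines would do it in Access VBA, run from a brand-new empty database; the file path and the rules for skipping system objects are only illustrative, not what I actually have written:

    ' Rough sketch: pull the pieces of an ailing database into this new, empty one.
    ' The path is invented; a real salvage would also bring over forms, macros, and modules.
    Sub SalvageFromOldFile()
        Const OLDDB As String = "C:\Data\Ailing.mdb"
        Dim src As DAO.Database
        Dim td As DAO.TableDef
        Dim qd As DAO.QueryDef

        Set src = DBEngine.OpenDatabase(OLDDB)
        For Each td In src.TableDefs
            If Left(td.Name, 4) <> "MSys" Then    ' skip Access system tables
                DoCmd.TransferDatabase acImport, "Microsoft Access", OLDDB, acTable, td.Name, td.Name
            End If
        Next td
        For Each qd In src.QueryDefs
            If Left(qd.Name, 1) <> "~" Then       ' skip Access's hidden temporary queries
                DoCmd.TransferDatabase acImport, "Microsoft Access", OLDDB, acQuery, qd.Name, qd.Name
            End If
        Next qd
        src.Close
    End Sub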

With the eventual demise of this database certain from the start, I planned to separate the basic building blocks of the database into separate database files, using one database to tie all the pieces together. Also, I plan to do all my ad-hoc work in another database, so that the central coordinating piece can churn out reports automatically without tying up my work time or being disrupted by my innovations.

Because my databases are working parasitically off of the official factory software, combining the data the official system has in disparate, unrelated pieces into coherent meaningful information, the first challenge for me is to actually get the data. I have one database siphoning off the demand information, one siphoning the inventory information, another siphoning the transactional information, and another siphoning the shipment information. Order and shipment information has to be augmented with data not available from our factory database, which is sent nightly to our site from the hub site. I also have developed some information that never reaches our official database at all, some from the shipping software and some just collected as direct input, regarding claims and productivity. I will be able to tie this into my new database, but I haven’t gotten that far yet.
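For the one or two of you who care about the mechanics: the coordinating database does not hold copies of these tables itself. The natural way in Access to tie the pieces together is with linked tables, and that step looks roughly like this, with invented file and table names standing in for my real ones:

    ' Rough sketch: link the main table from each satellite database into the
    ' coordinating database. Paths and table names are made up for illustration.
    Sub LinkSatelliteTables()
        DoCmd.TransferDatabase acLink, "Microsoft Access", "\\server\share\Demand.mdb", acTable, "tblDemand", "tblDemand"
        DoCmd.TransferDatabase acLink, "Microsoft Access", "\\server\share\Inventory.mdb", acTable, "tblInventory", "tblInventory"
        DoCmd.TransferDatabase acLink, "Microsoft Access", "\\server\share\Transactions.mdb", acTable, "tblTransactions", "tblTransactions"
        DoCmd.TransferDatabase acLink, "Microsoft Access", "\\server\share\Shipments.mdb", acTable, "tblShipments", "tblShipments"
    End Sub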

Coordinating this information is the real trick. A shipment fulfills an order by way of a transaction, depleting inventory. Since my database does not own any of this information but clones it from the official system (a live data connection is too slow and times out), I have to make decisions on when I will refresh my information. The larger the data pool I hold, the longer it takes to add new data to it (if I use indexing, or data rules that ensure the same information is not captured twice, which necessarily involves checking the existing data when adding). Also, larger amounts of data are more likely to develop corruption. But part of the necessity of my database is that the system database does not keep adequately detailed or related information, so if I don’t archive the data for an adequately long time I have not gained anything.
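The “do not capture the same information twice” rule, by the way, is nothing fancier than an append query that only takes rows whose key has not been seen before. A minimal sketch, assuming a table linked to the official system and invented table and field names:

    ' Rough sketch: append only the transactions my clone does not already have.
    ' tblSystemTransactions (linked to the official system) and TransID are invented names.
    Sub RefreshTransactions()
        Dim sql As String
        sql = "INSERT INTO tblTransactions " & _
              "SELECT s.* FROM tblSystemTransactions AS s " & _
              "LEFT JOIN tblTransactions AS t ON s.TransID = t.TransID " & _
              "WHERE t.TransID Is Null;"
        CurrentDb.Execute sql, dbFailOnError
    End Sub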

The approach I am taking this time is to keep the most detailed data, which requires the most frequent cloning or sampling from the official system, for a relatively short time, and roll some of that back into a long-term archive that will still be short in terms of serious data retention, but long enough for useful comparisons. I have mulled this over in my head for months and months, trying to find an ideal set of numbers (two different time periods or three, or four? Short data daily or weekly or monthly? Long data for years or quarters or months?), and I can’t remember for certain what I finally wrote down and committed to, for better or worse. I believe I decided on ten days for short term (a week, plus buffer time for “dud” data days like holidays) and 100 days for long term (about one quarter and one week). One hundred days is not a long term, but at that point I realized I have to just admit that I am not building a real industrial data system, and refer all questions back to the official database, however disappointing it may be.
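The roll-off itself would just be a couple of scheduled queries: summarize the detail that is about to age out into the archive table, then trim both tables back to their windows. A rough sketch, with invented table and field names, and presuming the deletes run right after the append so nothing gets archived twice:

    ' Rough sketch of the roll-off: 10 days of detail, 100 days of archive.
    ' tblDetail, tblArchive, and their fields are invented names.
    Sub RollOffOldData()
        Dim db As DAO.Database
        Set db = CurrentDb

        ' Summarize aging detail into the long-term archive (one row per item per day).
        db.Execute "INSERT INTO tblArchive (ShipDate, ItemNo, Qty) " & _
                   "SELECT ShipDate, ItemNo, Sum(Qty) AS Qty FROM tblDetail " & _
                   "WHERE ShipDate < DateAdd('d', -10, Date()) " & _
                   "GROUP BY ShipDate, ItemNo;", dbFailOnError

        ' Trim each table back to its window.
        db.Execute "DELETE FROM tblDetail WHERE ShipDate < DateAdd('d', -10, Date());", dbFailOnError
        db.Execute "DELETE FROM tblArchive WHERE ShipDate < DateAdd('d', -100, Date());", dbFailOnError
    End Sub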

Just as I do not have time to document my system, I likewise cannot afford to build it as a coded, error-trapped, well-tested system. I am using quick and dirty macros to run queries that all presume the data is there: that the connection to the official system is working, that the supplementary data has been sent up on schedule, that nothing has corrupted one of my databases, and that the macros to update their information have also run. This is about like planning that it won’t rain a single day for a given two-week period; in our climate, that is possible, and during a dry spell it might well turn out that way, but you also know that at some point within a year the assumption will not hold.
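For contrast, even a minimally error-trapped version of the morning routine would not be much more code; the difference is that it would notice when the assumptions fail instead of leaving things half-done. A sketch, with invented query and table names, of exactly what I have not built:

    ' Rough sketch: the morning routine with the bare minimum of error trapping.
    ' Query names and the tblRunLog table are invented for illustration.
    Sub MorningRoutine()
        On Error GoTo Trouble

        DoCmd.SetWarnings False
        DoCmd.OpenQuery "qryRefreshTransactions"   ' presumes the linked system tables respond
        DoCmd.OpenQuery "qryRefreshShipments"
        DoCmd.OpenQuery "qryMorningReport"
        DoCmd.SetWarnings True
        Exit Sub

    Trouble:
        DoCmd.SetWarnings True
        ' At the very least, record what failed instead of dying silently with an error on the screen.
        CurrentDb.Execute "INSERT INTO tblRunLog (RunTime, ErrText) " & _
                          "VALUES (Now(), '" & Replace(Err.Description, "'", "''") & "');"
    End Sub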

Thus, even though I need my routines to run automatically, they have to run within my realm of awareness so that I can fix the problems and get them going again when something breaks down. But when an automatic routine is running, I am unable to use the database that is working or to open up another database; and the morning routine opens all kinds of windows that get in the way of what I am trying to do. I solved this by taking advantage of a computer that is used for little besides printing off some extra shipping labels as needed. The computer used to sit in the break room but P.B. had it moved into the office, which, although he had his own reasons for doing it, provided me with the perfect vehicle for doing my slave work.

The dangerous thing about this system is that it uses a generic login. Almost anyone in the factory could get on the computer while it was doing whatever, and get into my database. So far I have never secured my databases, because it is incredibly easy to lock yourself out of a database (as I have done several times); but one thing I would like to do with the new database system is allow other people in the factory to access some reports themselves, providing their own parameters, so that my time is not taken up with requests for ordinary data. I would use another, locked-down database to accomplish this, but it would still behoove me to have every database in my system secured to prevent accidental or incompetent corruption of a fundamental part of my data network. Of course this is completely undermined if I have a generic account running my most sensitive administrative tasks.
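The self-serve part, at least, is simple enough in Access: a saved parameter query prompts whoever runs it for their dates, and the report builds itself. A sketch of how one might be set up through DAO, with invented table, field, and query names:

    ' Rough sketch: create a saved parameter query that anyone could run with
    ' their own date range. All of the names here are invented.
    Sub CreateSelfServeReport()
        Dim sql As String
        sql = "PARAMETERS [Start Date] DateTime, [End Date] DateTime; " & _
              "SELECT ShipDate, ItemNo, Sum(Qty) AS TotalQty " & _
              "FROM tblArchive " & _
              "WHERE ShipDate Between [Start Date] And [End Date] " & _
              "GROUP BY ShipDate, ItemNo;"
        CurrentDb.CreateQueryDef "qrySelfServeShipments", sql
    End Sub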

The most essential pieces of my new system are now operational. Without any bother from me, the information is being stowed away for later use, available to be analyzed and compared and referenced at any time later. Now I need to begin opening that up into actual useful reports, and that is where the system that I built is exposed to abuse, damage, and ruin. By separating key data into distinct databases, less of the system should go bad at any one time, and it should take less time to rebuild. But the keystone database that coordinates all the information, and uses layers upon layers of queries to accomplish it, is already turning into a maze.

I wanted to have several coordinating databases, so that each long-term stable report could run on its own, but that is not possible without sophisticated coding, because besides relying on the same base tables of information, many of my reports rely on the same sub-queries that have to be tweaked and updated. It is hard enough to maintain version consistency between the queries used in the basic databases and their clones in the coordinating database (because, with ordinary Access means, you cannot share queries between databases the same way you can share tables).
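There is a clumsy workaround: the SQL text behind a saved query is reachable through DAO, so a bit of code could push the base database’s version of a shared sub-query into its clone in the coordinating database whenever I tweak it. A sketch, with invented file and query names:

    ' Rough sketch: keep the coordinating database's copy of a shared sub-query
    ' in step with the base database's version. The path and query name are invented.
    Sub SyncSharedQuery()
        Dim basedb As DAO.Database

        Set basedb = DBEngine.OpenDatabase("\\server\share\Shipments.mdb")
        CurrentDb.QueryDefs("qryShipmentsByDay").SQL = basedb.QueryDefs("qryShipmentsByDay").SQL
        basedb.Close
    End Sub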

All of this could be greatly improved on, but it would take more time, to be coded, or more money, for a better system, or–heaven forbid–a central, official system that is properly designed for data storage and open for retrieval so that none of this fooling around with Access hacks is necessary. But I shouldn’t complain, as it is my ticket to developing my skills in database concepts.

In this overview of the larger structure of my database, I haven’t gotten into the considerations of sorting away the data, how it is divided up and kept most efficiently, and various compromises on those principles for sake of usefulness. But this is what I like to do.

Only a little smoke

December 8th, 2007

Twice this week I came in to find the database that was supposed to be cranking out morning reports instead saying, “Invalid argument.” For those without any coding background, this is basically saying “The thing you want to use can’t be used that way, or maybe it doesn’t even exist.”

In a previous generation of the same setup this would happen if the routine had previously failed midway through execution, because a table would have been deleted and not recreated (the error message was different, though: “Object not found”). Because of this very problem, in my last redesign I made sure that no tables were deleted and recreated so that this would not happen.

The first time I saw this message I figured something had been accidentally deleted or that something was corrupted, and I would most likely need to recreate the database–either copying all of the objects out of the old into a new, or just building the third generation of the database. I have been wanting to build the third generation for months now, but my needs for this next version entail a complex of separate databases interdependent on one another, which all need to be updated off of the real, official plant software, and have to be updated in the right sequence to produce useful data.

It will be a big project, one that requires at least periods of intense concentration to think it out in an orderly way, and I have been trying to get my direct-value work done first. (Direct-value–I made up that bit of business jargon to mean the stuff I do that has an immediate result for people who respond to my work.)

Just so as not to abandon the ship when a little bailing would do the trick, I ran compact & repair and that actually did make the problem go away. Except that it appeared again two or three days later. Compact & repair worked again. If this were a machine at Acme and not a piece of software, we would say that it is in good working order. But then that’s just because we don’t understand TPM.
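For what it is worth, compact & repair does not even have to be a menu click; it can be run from code in another database, which is tempting for a file that needs it this often. A rough sketch, with invented paths, of what scheduling that bailing would look like:

    ' Rough sketch: compact and repair the morning-report database from code.
    ' Paths are invented; the file being compacted must not be open anywhere.
    Sub CompactMorningDatabase()
        Dim ok As Boolean
        ok = Application.CompactRepair("C:\Data\Morning.mdb", "C:\Data\Morning_fixed.mdb", True)
        If ok Then
            Kill "C:\Data\Morning.mdb"                                  ' discard the old copy
            Name "C:\Data\Morning_fixed.mdb" As "C:\Data\Morning.mdb"   ' put the compacted copy in its place
        End If
    End Sub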

I persist in thinking that one of the reasons I was hired to work directly for Acme was to retain my skills with Access as a way to get useful information, presented with a modern manufacturing perspective, out of an old system built around outdated manufacturing theories. Whether this was part of the original thinking or not, it has not been much of my actual job. My boss does not seem to be very interested in this capability of mine. While I have made time to put together a few reports that add valuable information to what is readily available, these reports seem to be mostly ignored, and I do not often field requests for regular reports on key business functions. I have had drive-by requests for good enriched data, and many requests for momentary data, but not a request for serious sustained data with the clearance to spend the time it would take to deliver that kind of result.

My database needs to be rebuilt and I would love to do it, but I am not sure that it is actually my job or that my boss would appreciate my using the time that way.

After all, what I’ve got is only smoking a little.