Data Migration Checklist For Business

Here’s why a data migration checklist is important to follow. A project to move business application data between databases may look like a purely technical issue, but on closer examination it clearly has business considerations as well. For example, the decision on how many years of historical data needs to be moved is a business decision, not an IT decision.

The primary reasons a project to move data gets started are legacy migrations, new business applications, database changes, decommissioning, and getting more value from your data. Each is described below.

MBFoster has participated in 300+ database moves since 2003. We have a proven method that organizes an enterprise for business data migration. Migration must be disciplined, and it must enforce the roles that business users play in governance and validation.

Legacy Migrations

An enterprise may be moving off a legacy system to a new database and operating system. The application must be brought over to the new environment while preserving the unique business logic of the custom application.

Some of the considerations will be technical, but the business users need to be involved in the planning and execution, especially the verification of the data and the application in the new environment. 

New Business Applications

When a new application is deployed, it is often not feasible to rekey all the data from the previous environment, so the existing data must be rescued. We help customers plan the data move from the source to the target.

There may be new fields in the application that have no equivalent in the source. The user community needs to be involved in deciding how the business will use the new fields. Can the value be derived from other fields? Do they have to be calculated? Are you adding GIS locations on records as you move towards a digital era? There are sometimes fields that are in the source, but not in the target.

There may be reports or spreadsheets that drive decisions based on those fields. The user(s) need to understand the fields that are being dropped or changed, so that the new reports or spreadsheets are adjusted for the differences. 

Changing Databases

Some of the projects involve moving data between different databases while keeping the application the same. Users’ roles include confirming how much history is required and being involved in the testing and validation.

User Acceptance Testing (UAT) verifies the data is moved properly and completely. The specific plan to confirm the completeness and accuracy of the data migration should be prepared during the early stages, so that the business agrees on the decisions regarding data validation and can plan the users’ time for testing the results after the data migration. This communication is invaluable in improving users’ trust in the results of the data migration. 
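
As a minimal sketch of what "properly and completely" can mean in practice, the check below compares row counts and a column total between a source table and its migrated copy. The "orders" table, "amount" column, and in-memory SQLite databases are hypothetical stand-ins so the sketch runs as written; a real project would run one such check per migrated table and review the report with the business users.

    # Minimal UAT reconciliation sketch: compare row counts and a column total
    # between a source table and its migrated copy. Table and column names are
    # hypothetical stand-ins.
    import sqlite3

    def reconcile(src_conn, tgt_conn, table, amount_col):
        checks = {}
        for label, conn in (("source", src_conn), ("target", tgt_conn)):
            row = conn.execute(
                f"SELECT COUNT(*), COALESCE(SUM({amount_col}), 0) FROM {table}"
            ).fetchone()
            checks[label] = {"rows": row[0], "total": row[1]}
        return checks, checks["source"] == checks["target"]

    if __name__ == "__main__":
        # In-memory databases stand in for the real source and target connections.
        src, tgt = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
        for conn in (src, tgt):
            conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
            conn.executemany("INSERT INTO orders VALUES (?, ?)",
                             [(1, 100.00), (2, 250.50)])
        checks, ok = reconcile(src, tgt, "orders", "amount")
        print(checks, "MATCH" if ok else "MISMATCH")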

Decommissioning

Even decommissioning requires user input – we help customers preserve data in a format that supports inquiries against historical data. Stakeholders who must account for the legal, financial, and informational aspects of the data in question must be integral participants in the process.

Often the data must be in a form that can be queried without the benefit of the original application, where a report or a screen would place the data in context. A raw dump of the underlying data does not provide enough context to meet compliance or audit standards. Business users must specify how the data is to be delivered, so that inquiries, both internal and external, can be satisfied at the level of data required. 

Get More Value From Your Data 

Many customers include in their data migration projects a plan to use their data to better understand business operations and derive new knowledge. As a result, the MBFoster team is involved in preparing the data for analytics – looking at what KPIs (Key Performance Indicators) the business would like to track, where the data can come from to feed a dashboard measuring those KPIs, and how it can be curated into the right form to support timely decisions.

By understanding the business needs, we can help the customer get the data in an accurate and timely manner. We have helped build real-time dashboards with synchronization between disparate database types to enhance the business value of the data the business is paying to capture. 

Critical to the success of migration projects is getting the data correct, which requires a team effort: users working with IT to assess, plan, and execute the migration of the data. When IT works closely with the business users, the conversations around planning and testing will reduce surprises. 

Data Migration Checklist to Follow

Here is a checklist for the business to help with planning the framework and governance required. 

  1. Assign a project manager.
  2. Inventory the stakeholders.
  3. Verify the historic data requirements with each stakeholder category (Finance, Legal, etc.)
  4. Assign a user to oversee governance to ensure the migration is defined from a business viewpoint.
  5. Identify the Data Stewards: people who are extremely knowledgeable about the data related to a workflow. They ensure the data is correct for that workflow and help define the rules for the data – they will also be involved in deciding whether the migration is correct and complete.
  6. Schedule ample time to allow users to be involved in the planning of data in the new environment. Data definitions may change between the old and new systems and there may be new classifications.
  7. Work with IT to plan a schedule for the project.
  8. Allow time to review the data in the new system – look at screens and reports to identify any anomalies. Check periodic reports (daily, weekly, monthly, annually).
  9. Work with IT to validate data. This may include help on MDM (Master Data Management).
  10. Practice the “go-live” several times to ensure that:
    1. The time window for the migration is understood
    2. The data validation process (usually reports) works
    3. All team members, IT and users, know what they will do during the migration “go-live”

Data migration is about moving a valuable business asset (the data) safely, without loss or degradation. The business users need to be involved from the beginning of the project to help make it a success! 

If you have questions or suggestions, please reach out and have a conversation – you can write me at Birket@MBFoster.com or call me at 1-800-ANSWERS (that’s 800-267-9377), extension 204. Our website is www.MBFoster.com.

Application Migration – It is a team effort

The thing about applications is that they support the workflow of your business, so an application migration has to be done with a plan. The plan has to start at the business end of the organization. First, take an inventory of the applications. Then scorecard them into a portfolio based on business fit. What are the functions of each application, and how does it work for each of the different roles of the business users?

Even if an application fits, it might still belong on a transition list because of risk from the current infrastructure (hardware, software, or database type or versions), the current level of documentation, or the succession plan for the users or the IT staff who support it.

If there are good reasons to do an application migration, it will take a team of business users working with IT staff to organize the new application environment and install the application in its new home – development, test, and production environments must all be considered, whether the application is COTS or custom, and whether it is deployed on premises or in the cloud.

Excellent leadership, with dynamic project management, will be required to move the project across the finish line. Book time from all parties early to ensure the schedule is not derailed by a resource being on vacation. After the initial data load, followed by unit and integration testing, the UAT stage begins. User Acceptance Testing has to be completed and signed off before go-live.

It is a team effort – don’t throw anyone under the bus – learn to co-operate and communicate. The communication plan is as important as the WBS (Work Breakdown Structure). Set expectations, and plan carefully, to make sure your project is one of the 10% that end up on time and on budget rather than the 55% that have overruns or the 35% that are simply cancelled.

My team has won awards from customers with whom we partnered on application migration projects. Our customer list speaks for itself. You all need to be on the same page, with great cooperation, especially if your plan runs into snags.

On behalf of the MBFoster team – thanks for stopping by.

How to Bridge HP3000 With Other Data to Extract, Clean, Transform and Load

The trouble with today’s growing investment in IT is that everything is integrated — and this makes the HP3000 an island of data in your organization. In order to bridge between this platform and others, several things need to occur. The obvious items have to do with what data needs to be moved, when, and how often. The technology stack in such a solution will need to address these issues, which depend on the scope of the move.

For a one-time move — perhaps a special project to get certain information on a group of customers who bought a product over the past two years — the scope is easy and the target can just have fields that are selectively populated.

For a continuous feed of data — perhaps to an ODS (Operational Data Store), datamart or to another application — the problem becomes more complex. After all, the need to move data between platforms is becoming a business driver. We have customers taking advantage of our J2EE technology to integrate into a Java environment with JINI, EJB, JTS and SSL support. All of this has allowed the HP 3000 to play as an equal in the Enterprise Data Bus Architecture. But where to start your bridging sparks a good set of questions.

You can follow these questions to outline your requirements:

  1. Does there need to be a start date?
  2. What data needs to be captured, and how do we identify the data required?
  3. Is it transactional data, or updates to a file selected by timestamp?
  4. Is it synchronous data, or just an hourly, daily, weekly or monthly data sweep that is required? (A timestamp-driven sweep of this kind is sketched after this list.)
  5. Once we have some candidate data, how will it be checked for integrity before it is sent to the application or database?
  6. Is there a way to make sure (via audit) that all the transactions were correctly posted to the target database and none were missed?
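
To make questions 3 and 4 concrete, here is a rough sketch of a periodic, timestamp-driven sweep: each run selects only the rows changed since the last high-water mark and pushes them to the target. The table, columns, and watermark file are hypothetical stand-ins, and the SQLite connections are placeholders for real source and target databases; a production feed would also need the integrity and audit checks raised in questions 5 and 6.

    # Sketch of a timestamp-driven sweep: move only rows changed since the last
    # run, then advance a high-water mark. Schema and file name are hypothetical.
    import sqlite3
    from pathlib import Path

    WATERMARK_FILE = Path("last_sweep.txt")  # hypothetical bookkeeping location

    def last_watermark() -> str:
        if WATERMARK_FILE.exists():
            return WATERMARK_FILE.read_text().strip()
        return "1970-01-01 00:00:00"

    def sweep(src_conn, tgt_conn) -> int:
        since = last_watermark()
        rows = src_conn.execute(
            "SELECT id, amount, updated_at FROM transactions "
            "WHERE updated_at > ? ORDER BY updated_at", (since,)
        ).fetchall()
        tgt_conn.executemany(
            "INSERT OR REPLACE INTO transactions (id, amount, updated_at) "
            "VALUES (?, ?, ?)", rows)
        tgt_conn.commit()
        if rows:
            WATERMARK_FILE.write_text(rows[-1][2])  # newest updated_at seen
        return len(rows)

    if __name__ == "__main__":
        src, tgt = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
        for conn in (src, tgt):
            conn.execute("CREATE TABLE transactions "
                         "(id INTEGER PRIMARY KEY, amount REAL, updated_at TEXT)")
        src.executemany("INSERT INTO transactions VALUES (?, ?, ?)",
                        [(1, 9.99, "2011-06-01 10:00:00"),
                         (2, 20.00, "2011-06-01 11:30:00")])
        print(sweep(src, tgt), "row(s) moved")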

Synching Data

We have been helping customers with synching data for over 15 years and moving data for over 25 years. We have helped customers with the ECTL process (Extract, Clean, Transform and Load) as well as creating a data quality focus to clean up the data before the project is implemented. We often get to discuss the history, policy and lifecycle of the data.

We ask when and how many transactions of what kinds are produced, and how long the full details are required. We want to know when and how summaries play into the trending and decision-making process. Customers need to know what data they want to share with suppliers and customers. Who needs the data internally, and what else do they need to do their job?

Whether you plan on a project for synchronizing data, or moving it one time, or doing periodic refreshes, there is a framework required before you can start the project.

We have been evolving our solutions to help customers with these problems since we first took data from an IMAGE database and built Oracle “loader” files in 1985. Our UDA (Universal Data Access) series was built with the philosophy that we should be database- and operating system-agnostic. We have evolved to go beyond the HP 3000 to include SQLServer, DB2, Sybase, Ingres, Cache, Eloquence, PostgreSQL and MySQL, all to work with Unix, Linux, Windows, AS400 and more.

The objective is to allow “drag and drop” data transformation between any of the databases regardless of source and target platform. We typically pull or push data at the rate of 5-10 million records per hour.

We still support the HP 3000 with all of its file types – IMAGE, Allbase, KSAM and flat files. UDALink, which includes ODBC, JDBC, and easy-to-use MBFReporter capability, is being used daily by thousands of users at hundreds of sites. We add new copies as customers discover that they need 64-bit clients to support ODBC access to the HP 3000.

For many customers we have also been replacing ODBCLink/SE, a product we licensed to HP from 1996-2006 for bundling into MPE/iX. Now that we are five years beyond supporting that product for HP, we find that customers are moving to new versions of Windows Server or SQL Server, triggering the need for a new client to connect to the HP 3000 data source or, in the occasional case, to an HP 9000 running Allbase. We continue to evolve the solution, and have added XML, XLS, PDF, CSV, and several self-describing file types as report output formats.

For the past 10 years, our 3000 customers have been able to use .NET applications with ODBC and with our RPC mechanism. The RPC mechanism makes XLs on an HP 3000 available to a Microsoft environment as if they were libraries (both COM and .NET work). It takes code compiled on the HP3000 (in COBOL, C, C++, Pascal, Fortran and so on) and allows a Microsoft-based development environment to leverage the tried-and-true business logic without having to duplicate it. This goes beyond data, giving the 3000 more of a role in the architecture for new and current systems.

The HP 3000 may be gone from the supported platform list for HP, but there exists a small cadre of dedicated companies who know the HP 3000 and will help customers who must homestead to get the most from their systems. Over the past 10 years since HP’s announcement of its plan to phase out 3000 support, MBFoster has continued to support its solutions for data access and delivery. We have added products and services that help the HP 3000 application environment. Beyond the data, MBFoster is helping customers with application support — we have expertise to help write reports or modify business logic in COBOL, Fortran, Powerhouse, C, C++, and other legacy languages.

If a customer does decide to move from an HP 3000, we have those services, too. We have helped customers move data since 1985 and transition applications since 2001. We also do a lot of work on planning the transition (contact us for our “build, buy or migrate” webinar) as well as the decommissioning process: to transfer data to the new application, first for testing and then for production cutover — and then finally to preserve data for historic purposes and compliance reasons.

The word legacy means treasure. And in the case of the HP 3000 the treasure is huge – a highly reliable system that rarely fails (mean time between reboots is most often measured in years) and reliably runs millions upon millions of transactions across a wide range of industries, from education through local government, healthcare, manufacturing, transportation, pharmaceuticals, and retail. At MBFoster we are striving to sustain the HP 3000, and its legacy applications and data, as assets for our customers.

Whether it is a software product, migration project, data services, or project management, MBFoster makes it easy to deliver the right information to the right person at the right time. We work with our customers to streamline IT business operations to reduce costs, improve delivery, and grow revenues. If you have questions, call us at 800-ANSWERS (800-267-9377). See us on Facebook at https://www.facebook.com/MBFosterAssociates or on the Web at www.MBFoster.com.

Data Quality at Kwantlen Polytechnic University

In the fall of 2010, Warren Stokes, Registrar for Kwantlen Polytechnic University (https://www.kwantlen.ca/), gave the presentation Data Quality Assurance in the Registrar’s Office at the Canadian Banner User Group Conference in Victoria, BC. Stokes’ presentation has many ideas that apply broadly to improving data quality, process, and governance in any organization. This article shows the measurable cost and administrative savings achieved by implementing a proactive data quality process.

Background
Kwantlen Polytechnic University (Kwantlen) is a university located in British Columbia, Canada, with campuses in four locations: Surrey, Richmond, Langley, and Cloverdale. Kwantlen is primarily an undergraduate university with significant vocational (trades) training. In the fall of 2010 enrolment was just under 14,000 students across the four campuses. The Registrar’s office has 110 staff members, including admissions, records, scheduling, and front counter.

Kwantlen’s Enterprise Resource System (known as a Student Information System in the education market) is Banner, a third party application package used by many American and Canadian universities and colleges. Banner is used to track all student admission applications and the associated application approvals for admission to Kwantlen.

Challenges
In the fall of 2007, there were 7,400 new applications to Kwantlen. These applications created more than 17,000 internal records in the Banner system. Applicant and enrolment reports, critical to senior management, were taking 7-14 hours of staff time to prepare each and every week. Many of the data records had invalid data. For example, the city of Surrey was often spelled as Surrrey. When mail was sent to a student applicant with a misspelled city name, it was returned to the university as an invalid address.

Even worse, the ranking system critical to admitting students at Kwantlen required redundant data to work. A free-format field in each application record was used to record “special comments”. These comments had to be, and were, manually reviewed. Reporting staff were overwhelmed trying to clean up the data prior to report creation.

The key problem:

Data creators were not accountable for the data they were creating

There were a number of side effects of the redundant and incorrect data. Students were not being admitted to Kwantlen in a timely manner. If they were admitted, incorrect data would prevent individual students from registering for the courses they wanted. Mail was regularly returned and had to be dealt with on a case-by-case basis. Department morale was suffering due to the delays and mistakes.

Finding a Solution
The first part of the solution was to educate staff, fix systems, and improve data processes. The goal was to ensure that an operator or data analyst could “tell the right story” two, five, or more years after the student had first been admitted to Kwantlen. The data in the Banner system should give a reasonable accounting of “what happened”.

Goals for the admission and enrolment reports were created. Key goals for these reports were:

Extracted and published in less than 30 minutes
100% accurate at time of extraction
No additional tuning (extract, transform, or load) prior to publication

Another task was to help the data creators become efficiently accountable for the data they were creating. To do this, a set of enrolment and application exception rules was created. These rules were used to create reports of invalid data on a daily or weekly basis and were automatically delivered to the data creators via email.

IT Aspects
The system Stokes and his team created at Kwantlen is a web-based interface that allows users with the right permissions to create, edit, and submit automated auditing rules. Each rule has a description, the database column headings to be included in the email message, the SQL statements to select invalid data, sort parameters, the frequency (daily or weekly), and an email address for delivery. For security reasons, only email addresses within the Kwantlen domain are accepted. Once the rule is verified it is submitted to an automated job scheduler (cron running on a flavor of UNIX, in their case).
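
As a rough illustration of the idea (not Kwantlen’s actual system, which is a web front end feeding cron), the sketch below shows what such a rule might look like as data, plus a small routine that runs the rule’s SQL and formats the exceptions as an email body. All names, the schema, and the example city check are hypothetical.

    # Hypothetical sketch of an audit rule and the routine that turns it into an
    # email body. Rule fields mirror the description above; names and schema are
    # invented, and delivery/scheduling (cron, SMTP) are left out.
    from dataclasses import dataclass

    @dataclass
    class AuditRule:
        description: str
        columns: list[str]   # column headings to show in the email
        sql: str             # query that selects the invalid rows
        frequency: str       # "daily" or "weekly"
        mail_to: str         # a distribution list, not one person's address

    def run_rule(conn, rule: AuditRule) -> str:
        rows = conn.execute(rule.sql).fetchall()
        lines = [rule.description, "\t".join(rule.columns)]
        lines += ["\t".join(str(value) for value in row) for row in rows]
        return "\n".join(lines)  # body handed off to the mailer by the scheduler

    city_rule = AuditRule(
        description="Applications whose city is not in the reference list",
        columns=["application_id", "city"],
        sql="SELECT application_id, city FROM applications "
            "WHERE city NOT IN (SELECT name FROM valid_cities)",
        frequency="daily",
        mail_to="records-data@example.edu",
    )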

People, Process, and Change
Creating change in any organization is hard. Stokes and his team did a great job creating awareness of the value and of the need for change. Many IT systems and internal changes were made. Gaining acceptance of the changes among all the stakeholders required sustained effort.

When the daily data quality reports first started arriving they had many records to correct. Hundreds of records, in fact. It took time and resources for the various departments to spend the effort to start cleaning up the records. At first, it seemed like an insurmountable mountain to climb. Stokes encouraged all data owners to keep working on their reports. Initially, reports were run weekly, instead of daily. Over time, the data was cleaned up.

Old Habits Die Hard
After the initial clean up, data quality was good for a time. People go on holidays. Some change jobs or leave. New hires join the organization. After the initial success, Kwantlen experienced a return to significant invalid data, despite the daily audit checks. Some of the lessons learned were:

Really, you do have to fix it every day.
Bad data can happen even if you are on holidays.
Distribution lists are a better idea than specific email addresses.
A distribution list can have multiple people on it and is easy to change when someone goes on vacation, leaves, or someone new is hired.
Old habits really do come back.

Birthdays and Addresses
Two data fields that commonly have invalid data at Kwantlen are birthdates and mailing addresses. It turns out that in Canada there are very few 13-year-old kids who legitimately apply to university, so a basic sanity check on the birthdate can catch errors. Invalid mailing addresses cause all sorts of problems, as real paper documents must be sent to newly registered students at Kwantlen. To simplify address verification, a third-party verification service is used.
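
A minimal, hypothetical version of such a birthdate sanity check might be an exception query like the one below (SQLite date syntax; the plausible-age bounds would be a business decision by the Registrar’s office):

    # Hypothetical exception query for the birthdate check (SQLite date syntax);
    # the plausible-age bounds would be set by the Registrar's office.
    IMPLAUSIBLE_BIRTHDATE_SQL = """
    SELECT student_id, birthdate
    FROM applicants
    WHERE birthdate > date('now', '-15 years')   -- younger than 15
       OR birthdate < date('now', '-90 years')   -- older than 90
    """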

Not all invalid data is created by Kwantlen staff. Students can apply on-line, and it is often the case that students filling out on-line forms make mistakes. In the end, the data owners in the Registrar’s office must be responsible for ensuring the accuracy of the data entered. The daily delivery of exception reports and daily cleanup by staff ensure that the data is accurate.

Today
From hundreds of errors, there are now often no more than two or three in a given day. A total of sixteen daily auditing rules have been created, plus one weekly rule. In 2010, there were 7,580 new students, which generated 16,000 application records: more students creating less work than in the fall of 2007. Applicant and enrolment reports take 30 minutes to create, compared to 7-14 hours previously.

Student applications are processed quickly. Applicants receive feedback faster. Key stakeholders are informed faster, with less work and effort for all involved. The data quality process saved time and money and helped improve the “customer experience” for the students.

This case study highlights the need for data management, data governance, and data quality. At MB Foster, we have helped numerous clients improve all three areas, with solutions and experience that can make a measurable difference to your organization.

About MB Foster
MB Foster has the people who understand how to deliver applications and data that match our customers’ business needs. For more than thirty years we have been the trusted advisor to customers ranging from local government to Global 100 enterprises. Our personalized service has proven time and again that we listen to customers and then find the right service strategy to provide solutions that solve their business and information technology challenges today and into the future.

Whether it is a software product, migration project, data services, or project management, we make it easy for customers to deliver the right information to the right person at the right time. We work with our customers to streamline their IT business operations to reduce costs, improve delivery, and grow revenues. Learn more at www.mbfoster.com.

Boost HP 3000 Effectiveness

Many customers decide to stay with the HP 3000 platform because of the HP 3000’s extraordinary reliability and low cost of ownership. When working with both homesteading and migrating customers, we see a number of practices around application and data management that can provide benefits now and in the future.

Change Management

HP 3000 applications have been developed over many years. This makes the applications highly effective to organizations because they accurately reflect the business rules of the organization. Any application built over time faces challenges matching all source code to all running production code. Many HP 3000 sites do not have a formal change management process for their applications.

Change management typically is implemented in two parts: version control and governance. Did you know that you can put your HP 3000 source code under the control of a version control system such as Microsoft Visual Source Safe? Doing so allows an organization to identify and document all component pieces of each application. The effort and knowledge gained reduce the risk to the organization by formalizing knowledge that is often scattered across many individuals.

A governance process for the release of new versions of HP 3000 applications further reduces the risk of changes. A version control system helps, as it causes organizations to assign version numbers and identify all specific files that need to be changed to implement an application change. A formalized development, test and release governance process makes sure that IT, users, and management are all aligned when it comes to releasing new versions of the software. Not only does this reduce organizational risk on numerous fronts, it sets up an organization for future change. We have yet to see a successful migration that did not have strong change management and governance.

Data Management

A second area where HP 3000 sites can improve performance is in data management. Redundant data can cost organizations millions of dollars every year. As many HP 3000 databases have been developed over decades, they often have large amounts of duplicate data. We have observed and participated in cases where rationalizing both duplicate data and the amount of historic data has resulted in large space reductions while speeding up batch processing by over ten times.

Another major focus area for HP 3000 improvement is cleaning up data. We advocate that you set up simple job streams that check for bad data nightly or weekly and report it directly to the affected users. At a recent conference, the Registrar of a major Canadian college reported that they now check all data nightly and email suspected bad data directly to the users responsible for entering it.

Removing bad data reduces the amount of data in your database and ensures that people who depend on this data make the right decisions. In our migration work, it is common to have to spend a lot of time cleansing data before migrating it, a task that is minimized if your data is already clean.

We have seen many HP 3000 sites leverage a data mart. A data mart provides an alternative view of your HP 3000 data in a popular SQL database such as MS SQL Server. While this introduces redundant data, the benefits outweigh the costs, especially when the data is transformed as part of the replication process. Using MS SQL Server allows HP 3000 sites to hire and train experts in the latest technologies, speeding delivery and lowering costs. Some sites are building all new functionality on top of MS SQL Server using bidirectional database replication technology. Over time the business becomes less dependent on the HP 3000 application. It also ensures that if you do migrate, the majority of your interface points are to SQL Server, which does not have to be migrated.
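
The transform-as-you-replicate step can be as simple as expanding codes into readable descriptions while loading the mart. The sketch below is a hypothetical illustration of that pattern; the table, column, and status-code names are invented, and SQLite stands in for both the operational source and the SQL Server mart.

    # Hypothetical transform-during-replication step: expand status codes into
    # readable descriptions while loading the data mart. SQLite stands in for
    # both the operational source and the SQL Server mart.
    import sqlite3

    STATUS_DESCRIPTIONS = {"O": "Open", "S": "Shipped", "C": "Cancelled"}

    def replicate_orders(src_conn, mart_conn):
        rows = src_conn.execute("SELECT order_id, status, amount FROM orders").fetchall()
        transformed = [(order_id, STATUS_DESCRIPTIONS.get(status, "Unknown"), amount)
                       for order_id, status, amount in rows]
        mart_conn.execute("CREATE TABLE IF NOT EXISTS orders_mart "
                          "(order_id INTEGER, status_desc TEXT, amount REAL)")
        mart_conn.execute("DELETE FROM orders_mart")  # full refresh keeps the sketch simple
        mart_conn.executemany("INSERT INTO orders_mart VALUES (?, ?, ?)", transformed)
        mart_conn.commit()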

These are two functional areas where we have seen HP 3000 sites increase their effectiveness. Many of these ideas can be introduced as a project, or one application at a time, letting you spread out the implementation cost. The biggest hurdle is making the commitment to change the way you do things now to increase your ability to execute in the future.

Mitigating Risks in the HP 3000 Environment

As a software vendor with licensed customers in the HP3000 market, I am astounded by the number of IT shops that have not clearly communicated to their senior management the issues associated with HP’s December 31, 2010 end of hardware and software support. You see, senior management is often more concerned with the budget than with the risks involved, since financial analysis is something the company is measured on. However, of all the companies I visit every year (and there have been hundreds), I have yet to see a company where the Microsoft Windows budget is less than the HP3000 budget (service bureaus aside). Windows always costs more, and yet that desktop environment does little to run the applications required to run the business.

Budgets are good to monitor. But you must also remain aware of risks, and monitor and plan for them, in your mission-critical 3000 environment.

So here’s a little end-of-summer exercise. Let’s think through a scenario for your HP3000. Something goes wrong with a disc controller, rendering your storage useless; what is your plan for getting things back on track? Yes, that would be called a Business Continuity / Disaster Recovery situation. What’s more, you should not only have such a plan readily available, but also have a management-approved measure called Mean Time To Recovery of Operation (MTTRO) associated with it. This metric consists of the costs for the loss or impairment of a critical resource as well as the time frames involved for different kinds of incidents. Each scenario should be played out, with the costs involved and a discussion of what is acceptable downtime for that situation. (For some ideas, see the Wikipedia entry on MTTRO.)

Think of the best possible scenario. The downtime occurs right after a backup, with spare parts and the right team members on site to recover from the failure. How long will it take you to recover? What will the downtime cost you while the HP3000 is not available? You will need to know whether that cost and the length of downtime are acceptable to your senior management team.

Okay, so let’s look at the impact of a crash on a Friday afternoon when the HP3000 was backed up last Saturday (you do verify your backup tapes, right?). You have a full backup from last Saturday and daily backups from Monday through Thursday. The spare parts are not on site, and you have to contact your provider to get the parts and a skilled technician to the site before you can start restoring your hardware and application environments. How long will it take to restore all the data, the applications, and the whole system?

Is there a plan with a priority order for recovery? How would you know what data was lost from today? How can you recover it — are there any transactions or data likely to be unrecoverable (for example, Web transactions)? What does four hours of downtime cost at the maximum, and what about eight hours, two days, or a week? Think about what you could do to mitigate the risks. What does it cost to shorten your MTTRO? You need to determine the cost of downtime per hour or per incident worth insuring against. Are you making a conscious decision not to make provisions to mitigate the risk?
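
As a back-of-the-envelope illustration, the arithmetic looks like this; all the figures below are purely hypothetical, so substitute your own hourly cost and fixed incident cost:

    # Back-of-the-envelope downtime cost: all figures are hypothetical.
    HOURLY_COST = 5_000            # assumed loss per hour of HP3000 downtime
    FIXED_INCIDENT_COST = 10_000   # assumed parts, travel, and overtime per incident

    for hours in (4, 8, 48, 168):  # 4 hours, 8 hours, 2 days, 1 week
        total = FIXED_INCIDENT_COST + HOURLY_COST * hours
        print(f"{hours:4d} hours of downtime costs about ${total:,}")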

At this point in the HP3000 market life cycle, it is worth understanding such a roadmap, your plan for applications, and what the high-level picture is for maintaining your 3000 environment. At my corporation we call this a sustainability plan. The plan looks at the entire environment — application, tools, people and skill sets (both users and IT personnel). It estimates the sustainability, the readiness for training, and the knowledge-transfer capability that exists within the corporation. From this sustainability plan you can see where the risks might be mitigated. More information can be found in our white paper (PDF), The Sustainability Plan. You can also use the plan to support the environment for an expected period of time.

Trust in the Future, Through Experience

We think of Birket Foster as the community’s futurist. HP has made it clear to the community that the future of the 3000 won’t include Hewlett-Packard. Since the company is now counting down its last two years of support, we wanted to look beyond that coming initial year of post-HP operations. Seeing into that future, with more migrations and fewer homesteaders, seemed a lively exercise for Birket Foster, leader of the HP Platinum Migration Partner MB Foster and a forward thinker. His company has been in this market since 1977, and a Migration Partner since 2002. He wanted to envision the 3000 market 10 years after that date.

We talked about the world of 2012, three years from now and well away from HP’s influence on 3000 ownership and migration. MB Foster is sharpening its message this year to reflect its business beyond 3000 expertise. In the years to come, the company is booked to help manage the migrated applications and environments running for the customers MB Foster has migrated. Foster calls this mission “providing the knowledge and experience to earn your trust.” We interviewed him just after he returned from fresh field work in the 3000’s e-commerce community.

Now that the HP MPE/iX lab has closed, will it affect the timeline for migrations?

If you’re already determined to stay on the 3000, the closing of the lab means nothing. The HP lab was doing less and less over the last five years anyway. It’s really about the applications, not about the 3000’s technology.

The correct answer to the question “When do I migrate” is “when the rest of the world changes over to the next major new technology.” When that technology gets introduced, and it cannot be incorporated into the 3000 in any way, then you end up with the 3000 unable to integrate.

I sat in a meeting with a CFO this month who said, “I’m going to be the last guy standing in the management team. Everybody is moving except me, because I’m the youngster. So guess what? I don’t want this on my watch, so I want to get the process ready. I’d like to start the process to mitigate the risk.” The people in the IT trenches don’t always understand that from a risk-mitigation point of view, management may vote differently. In this company, they brought somebody back from retirement to run the 3000. Does that tell you anything?

Seemingly small things can impact the future of 3000 transitions. Can you think of an element that’s been overlooked that will shape the future of the marketplace?

Availability of people who know how to support the applications. There are lots of hardware guys. It’s not just the people in IT, but also the people on the business side of the world. The last person in accounting who knows how the accounting system works – when he leaves, you’ll have to replace that system. That’s one of the biggest risks people are facing, whether they want to admit it or not.

It’s 2012. How much of the market has made the move by now? Who’s still on the 3000, who’s moved, and why?

Maybe 10 percent of the original installed base is left. Even today there are a lot of machines out there, but I know of companies that have plans afoot to get themselves out of where they currently are. That might not be by 2012, but it’s going to be pretty close to that time. For example, anybody who has a credit card application right now needs to be able to do certain kinds of encryption and protection for credit card numbers. Some applications didn’t handle that very well. If you just got told that your Visa, MasterCard and American Express merchant rights are going to be revoked if you don’t get onto the new application, I guess you don’t have a choice, unless you want to close the doors.

In the healthcare sector, there are new HIPAA regulations that make you ensure you can see who looked at a patient file. That’s often not going to have been built into the 3000 application.

It’s going to get harder over the next three years to put out a help-wanted call that says “Wanted: HP 3000 programmer.” You’re more likely to get a response if you want a Windows programmer, or a .NET programmer. Even a Java programmer, although we’ll see what Oracle does with Java.

I think you’ll be stuck with the small guys on the 3000. The big guys all will have moved, because they all have some kind of accountability to banking. Banks will start pushing down the chain on how much risk they have in their client base.

In fact, there are banks already doing that. Companies are having their risk profiles revised when they apply for their annual line of credit to cover payrolls or big inventory buys. Even though you’ve done business for 20 years, there’s somebody at the bank who’s going to look at you to see whether you’re a risk after all. During that process they may look at what’s critical to your business. If that’s an HP 3000, at some point somebody’s going to recognize it’s not HP’s price list anymore, so it represents a risk.

Birket Foster is CEO of MB Foster, the application specialists who deliver applications and data that match your business needs. www.mbfoster.com