Wednesday, July 31, 2019
Business Continuity Planning
Though interruptions to business can be caused by major natural disasters such as fires, floods, earthquakes and storms, or by man-made disasters such as wars, terrorist attacks and riots, it is usually the more mundane and less sensational events such as power failure, equipment failure, theft and sabotage that lie behind most disruptions to business.

A Business Continuity Plan or "Continuity of Business Planning (CoB Plan) … defines the process of identification of the applications, customers (internal & external) and locations that a business plans to keep functioning in the occurrence of such disruptive events, as well as the failover processes & the length of time for such support. This encompasses hardware, software, facilities, personnel, communication links and applications" (MphasiS, 2003).

A Business Continuity Plan is formulated to enable the organization to recover from a disaster with minimum loss of time and business by restoring its critical operations quickly and smoothly. The plan should address the recovery, resumption and maintenance not only of the technology components but of the entire business, since recovery of the ICT systems and infrastructure alone does not always mean full restoration of business operations.

Business continuity planning at XE therefore considers all risks to business operations, covering not only ICT applications and infrastructure but also the other business processes they directly impact. After an extensive Business Impact Analysis (BIA), a Risk Assessment for XE was carried out by evaluating the assumptions made in the BIA under various threat scenarios. Threats were analyzed on the basis of their potential impact on the organization, its customers and the financial market XE is associated with, and were then prioritized by severity. The following threats were identified for XE:

1. Natural disasters such as floods, fires, storms, earthquakes and extreme weather.
2. Man-made disasters such as terrorist attacks, wars and riots.
3. Routine threats, including:
   a. Non-availability of critical personnel
   b. Inaccessibility of critical buildings, facilities or geographic regions
   c. Malfunctioning of equipment or hardware
   d. Inaccessibility or corruption of software and data, including through virus attacks
   e. Non-availability of support services
   f. Failure of communication links and other essential utilities such as power
   g. Inability to meet financial liquidity requirements
   h. Unavailability of essential records

Organizing the BCP Team

The first and most important step in developing a successful disaster recovery plan is to create management awareness. Top-level management will allocate the necessary resources and time from the various areas of the organization only if they understand and support the value of disaster recovery, and management also has to approve the plan for final implementation. The BCP team therefore has to include a member from management who can both provide management's input and keep management apprised for feedback. Besides this, each core or priority area has to be represented by at least one member. Finally, there has to be an overall Business Continuity Plan coordinator, responsible not only for coordination but also for all other aspects of BCP implementation such as training, updating, creating awareness and testing. The coordinator usually has his or her own support team. XE's Business Continuity Planning team would therefore comprise representatives from management and from each of the core or priority areas, held together by the BCP coordinator.
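The severity-based prioritization of threats described earlier can be sketched as a simple scoring pass. The likelihood and impact ratings below are invented for illustration; the real figures would come out of XE's BIA and risk assessment:

```python
# Hypothetical likelihood/impact ratings on 1-5 scales, for illustration only.
threats = [
    {"name": "Power failure",         "likelihood": 4, "impact": 3},
    {"name": "Earthquake",            "likelihood": 1, "impact": 5},
    {"name": "Key staff unavailable", "likelihood": 3, "impact": 2},
]

def prioritize(threats):
    """Rank threats by a simple severity score: likelihood x impact, highest first."""
    return sorted(threats, key=lambda t: t["likelihood"] * t["impact"], reverse=True)
```

On these assumed figures, the frequent, moderate-impact power failure outranks the rare but severe earthquake, which matches the essay's point that mundane disruptions usually dominate.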
Even when the BCP is outsourced, it is necessary for management and nominated members from the core or priority areas to be closely associated with each step of the planning process.

Crucial Decisions

The key decisions made in formulating the Business Continuity Plan for XE were associated with the individual steps undertaken in making the BCP. The first step, Business Impact Analysis (BIA), involved a workflow analysis to assess and prioritize all business functions and processes, including their interdependencies. At this stage, the potential impact of business disruptions was identified along with all the legal and regulatory requirements for XE's business functions and processes. Based on these, decisions on allowable downtime and acceptable levels of loss were taken, and estimates were made of Recovery Time Objectives (RTOs), Recovery Point Objectives (RPOs) and recovery of the critical path.

The second step comprised risk assessment, during which business processes and the assumptions made in the course of the BIA were evaluated against various threat scenarios. The decisions made at this stage included which threat scenarios to adopt, the severity of the threats and, finally, identification of the risks to be considered in the BCP based on the assessments made. The next step, Risk Management, involved drawing up the plan of action with respect to the various risks. This was the stage at which the actual Business Continuity Plan was formulated and documented. Crucial decisions such as what specific steps would be taken during a disruption, the training programs that should be organized to prepare personnel to implement the BCP, and the frequency of updates and revisions that would be required were taken at this stage.
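The RTO and RPO decisions above lend themselves to simple feasibility checks. A minimal sketch, with all figures assumed for illustration rather than taken from XE's actual plan:

```python
def meets_rpo(backup_interval_min: float, rpo_min: float) -> bool:
    """Worst-case data loss equals the time since the last backup,
    so the backup interval must not exceed the Recovery Point Objective."""
    return backup_interval_min <= rpo_min

def meets_rto(restore_min: float, failover_min: float, rto_min: float) -> bool:
    """Everything needed to bring the service back (restore plus failover)
    must fit inside the Recovery Time Objective."""
    return restore_min + failover_min <= rto_min
```

For example, hourly backups satisfy a 4-hour RPO but not a 15-minute one; the same arithmetic applies when validating a recovery drill against the plan's stated objectives.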
Finally, in the Risk Monitoring and Testing stage, decisions regarding the suitability and effectiveness of the BCP were taken with reference to the initial objectives of the Business Continuity Plan.

Business Rules and System Back-ups

My friend works for the Motor Vehicles department that issues driving licenses for private and commercial vehicles. Applicants for any license initially come and deposit a fee. The particulars of the specific applicant, along with a photograph and biometrics in the form of fingerprints, are then entered into the database. Thereafter, the applicant undergoes a medical test, the results of which are also entered into the system's database. If approved in the medical test, the applicant has to appear for an initial theoretical test on driving signs, rules and regulations. If the applicant passes the test, he or she is given a Learner's License. The applicant then comes back for the practical driving test after a month, and is awarded the driving license on passing it.

New additions are made to the database of the driving license system at every stage of this workflow. Though the tests for the learner's license and driving license are held three days a week, an individual can apply on any of the department's five working days. People also come for renewal of driving licenses. Driving licenses are usually issued for a period of one to five years depending on the age and physical condition of the applicant. In the case of commercial vehicles, an applicant first has to obtain a trainee driving license and work as an apprentice driver for two years before becoming eligible for a license to drive a commercial vehicle. Moreover, a commercial driving license is issued only for a year at a time, and the driver has to come back for evaluation and medical tests every year. The number and frequency of transactions are therefore much higher for commercial vehicles.
As is evident from the business rules of the department, data is added and modified frequently for a specific applicant during the initial application process. Subsequently, data for the same applicant is added or modified again after an interval of one month. Thereafter, fresh data is added or the database modified only after a period of up to five years, when the applicant comes back for renewal. There is, however, always the possibility that someone loses or misplaces a license and comes back to have a duplicate issued.

When the scenario of multiple applicants, who can come in on any day for fresh licenses, duplicates or renewals, is considered, it becomes evident that transactions are not periodic or time-bound but continuous. Transactions can happen at any time during working hours, resulting in changes to the database of the system. Taking only complete backups of the system would therefore not be the optimal solution: whatever frequency of complete backup is adopted, the chance of losing data would be high in the case of database failure or any other disastrous event that results in system failure or corruption. Moreover, taking complete backups very frequently would be a laborious and cumbersome exercise. The ideal backup method in this case is incremental backup, in which only the data that is added or modified is backed up at the moment it changes, with a complete backup taken at a periodic frequency. The Motor Vehicles Department has therefore opted for continuous incremental backup, with a complete backup taken at the end of the day. As a Business Continuity Plan measure, the department uses a remote backup mirroring solution that provides host-based, real-time continuous replication to a disaster recovery site far away from its servers over standard IP networks.
This mirroring process uses continuous, asynchronous, byte-level replication and captures changes as they occur. Because it copies only changed bytes, it reduces network use, enables quicker replication and greatly reduces latency. The remote mirroring solution integrates with the existing backup solutions, can replicate data to perform remote backups, and can take snapshots at any time without any impact on the performance of the production servers. It replicates over the available IP network, both LAN and WAN, and has been deployed without any additional cost. This remote mirroring solution gives the department the maximum possible safeguard against data loss from failures and other disasters.

Database Processing Efficiency versus Database Storage Efficiency

Though storage costs as such have decreased dramatically over the years, the tension between database processing efficiency and database storage efficiency continues to be an issue, because the overall performance of a system is affected by the way data is stored and processed. In other words, even though the volume of storage space available may no longer be a constraint financially or physically, the way this space is utilized has an impact on database processing efficiency, which in turn affects the overall performance of the application or system. Though it is possible under present circumstances to compromise on database storage efficiency to gain greater processing efficiency and thus improve overall performance, truly optimizing a system requires striking a fine balance between the two. There can be many tradeoffs between data processing speed and the efficient use of storage space for optimal performance of a system.
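The "copy only changed bytes" idea can be illustrated with a toy model that compares a volume against its mirror in fixed-size blocks and ships only the blocks that differ. This is a sketch of the general technique, not the actual product the department deployed:

```python
BLOCK_SIZE = 4096

def delta(current: bytes, mirror: bytes, block_size: int = BLOCK_SIZE) -> dict:
    """Compare the current volume against the remote mirror block by block
    and return only the blocks that changed, keyed by byte offset."""
    changes = {}
    length = max(len(current), len(mirror))
    for offset in range(0, length, block_size):
        new = current[offset:offset + block_size]
        old = mirror[offset:offset + block_size]
        if new != old:
            changes[offset] = new
    return changes

def apply_delta(mirror: bytes, changes: dict, new_length: int,
                block_size: int = BLOCK_SIZE) -> bytes:
    """Bring the mirror up to date by writing only the changed blocks."""
    out = bytearray(mirror[:new_length].ljust(new_length, b"\x00"))
    for offset, block in changes.items():
        out[offset:offset + block_size] = block
    return bytes(out)
```

Only the `changes` dictionary crosses the network; for a small edit to a large volume that is a tiny fraction of a full copy, which is where the bandwidth and latency savings come from.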
There is no set rule on which tradeoffs to adopt; the choice differs according to the practical data creation, modification and flow of the system. Certain broad guidelines can however be followed to increase the overall utility of the database management system. Examples of such guidelines are found in the cases of derived fields, denormalization, primary key and indexing overheads, reloading of the database, and query optimization.

Derived fields

Derived fields are fields whose data is obtained by manipulating or operating on two or more original fields. The issue at stake is whether the data should be stored only in its original form, or additionally as processed data in derived fields. When the data is stored only in the original form, the derived field is calculated as and when required. Storing derived data obviously requires greater storage space but comparatively less processing time: storage efficiency is lower while processing efficiency is higher. However, the decision on whether to store derived fields depends on other considerations, such as how often the calculated data is likely to change and how often it will be required or used. An example will serve to make matters clearer. A university student's grade point standing is a classic derived field. For a specific class, a student's grade points are obtained by multiplying the points corresponding to the student's grade by the number of credit hours associated with the course. The points for the grade and the number of credit hours are therefore the original data, by multiplying which we get the grade points, the derived field.
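The grade-point calculation can be sketched as follows; computing the standing on demand represents the "store only the original fields" side of the tradeoff. A standard 4-point grade scale is assumed for illustration:

```python
# Assumed 4-point scale; the actual mapping would come from the university.
GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

def grade_points(grade: str, credit_hours: int) -> float:
    """Derived field: points for the grade multiplied by the course's credit hours."""
    return GRADE_POINTS[grade] * credit_hours

def grade_point_standing(transcript) -> float:
    """Compute the standing on demand from the original fields
    (grade, credit hours) instead of storing it: low storage cost,
    but a recomputation cost on every query."""
    total = sum(grade_points(g, h) for g, h in transcript)
    hours = sum(h for _, h in transcript)
    return total / hours
```

Storing the result of `grade_point_standing` alongside the transcript is the opposite choice: one extra column per student, but no recomputation when the value is read.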
The decision on whether to store the derived field will in this case depend on how often a student's grade points are likely to change, and how often they are actually required. The grades of a student who has already graduated are unlikely to undergo any more changes, whereas the grades of a student still studying at the university will change at regular intervals. In such a case, storing the grade points of an undergraduate would be more meaningful than storing those of a student who has already graduated. Then again, if the undergraduate's grades are reported only once a term, it may not be worth storing grade points as derived fields at all. The significance of the matter becomes clear when we consider a database of thousands of students. The tradeoff here is between storing the grade points as derived fields, gaining processing efficiency but losing storage efficiency, on the one hand; and not storing the derived fields, gaining storage efficiency but losing processing efficiency, on the other.

Denormalization

Denormalization is a process by which the number of records in a fully normalized database can be considerably reduced while still adhering to the rule of the First Normal Form, which states that the intersection of any row with any column should result in a single data value. Collapsing multiple records into a single record is applicable only in certain specific cases in which the number and frequency of transactions are known. Normalization of a database is required to maintain the accuracy and integrity of the data, which in turn ensures the validity of the reports generated by the system and the reliability of the decisions based on it. Denormalization, if done indiscriminately, can upset the balance of the database even while economizing on storage space.
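Denormalization of this kind can be illustrated with a small sketch. The meter-reading schema below is invented for illustration; the point is that collapsing rows is safe only when the number of records per entity is fixed and known:

```python
# Normalized: one record per monthly reading (many small rows).
normalized = [
    {"account": 1042, "month": 1, "reading": 310},
    {"account": 1042, "month": 2, "reading": 295},
    {"account": 1042, "month": 3, "reading": 342},
]

def denormalize(rows):
    """Collapse the per-month records for one account into a single record.
    Each row/column intersection still holds exactly one value, so First
    Normal Form is preserved; this works only because the number of
    readings per account is fixed and known in advance."""
    record = {"account": rows[0]["account"]}
    for row in rows:
        record[f"m{row['month']}"] = row["reading"]
    return record
```

The single wide record saves the repeated account key and per-row overhead, but any change in the business rule (say, a variable number of readings) would break the layout, which is why indiscriminate denormalization is risky.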
The dilemma of indexing

Unplanned use of primary keys has a telling negative effect on database storage efficiency. Many database systems resort to setting an index on a field. When a field is indexed, the system sets a pointer to that particular field, and the pointer helps in processing the data much faster. However, indexing a field means the system stores and maintains not only the data itself but also information about its storage. The question therefore again boils down to whether to achieve higher processing efficiency by compromising storage efficiency, or to achieve higher storage efficiency at the cost of processing efficiency. Sorting the data periodically is one way of lessening the dependence on indexes. However, sorting is itself highly taxing on the resources of a system; in large organizations with millions of records, a sort may take hours, during which all computer operations remain suspended.

Other factors

Storage efficiency and processing efficiency are also interdependent in other ways. Deleting data without reloading the database from time to time may result in the deleted data not actually being removed from the database; the data is simply hidden by setting a flag variable or marker. This results not only in low storage efficiency but also in low processing efficiency. Reloading the database removes the deleted data permanently, leaving a smaller amount of data and a more efficient use of resources, thereby boosting processing efficiency. Similarly, haphazard coding structures can negatively impact both the storage efficiency and the processing efficiency of a database. Completely ignoring storage efficiency while prioritizing processing efficiency can never lead to database optimization; conversely, optimization can never be achieved by an overemphasis on storage efficiency. The objective is to strike the right balance.
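The indexing tradeoff can be illustrated with a minimal sketch, using a Python dict as a stand-in for a database index. The licence-number field echoes the earlier Motor Vehicles example; all names are invented:

```python
def lookup_by_scan(records, licence_no):
    """No index: every lookup walks the whole table -- storage-lean but slow,
    since the cost grows with the number of records."""
    for rec in records:
        if rec["licence_no"] == licence_no:
            return rec
    return None

def build_index(records):
    """An index: extra stored data (here a dict of pointers into the table)
    that must be kept in sync with every insert and delete, in exchange
    for near-constant-time lookups."""
    return {rec["licence_no"]: rec for rec in records}
```

Both paths return the same record; the difference is where the cost is paid: the scan pays at query time, while the index pays in storage space and in maintenance on every write.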
The interrelationships between database storage efficiency and database processing efficiency therefore keep the controversy between the two alive, in spite of the dramatic decrease in storage costs over the years.

References

MphasiS Corporation, 2003, MphasiS Disaster Recovery/Business Continuity Plan, [Online] Available: http://www.mphasis.com [June 27, 2008]