Learning Ashram


MC0084-01 – Software Project Management & Quality Assurance

1. Explain the following theoretical concepts in the context of IT and Organizational Structures:
A) Hierarchical Organizational Structure
Ans:- A hierarchical organization is an organizational structure where every entity in the organization, except one, is subordinate to a single other entity. This arrangement is a form of hierarchy. In an organization, the hierarchy usually consists of a single person or group holding power at the top, with subsequent levels of power beneath them. This is the dominant mode of organization among large organizations; most corporations, governments, and organized religions are hierarchical organizations with different levels of management, power or authority. For example, the broad, top-level overview of the general organization of the Catholic Church consists of the Pope, then the Cardinals, then the Archbishops, and so on. Members of hierarchical organizational structures chiefly communicate with their immediate superior and with their immediate subordinates. Structuring organizations in this way is useful partly because it can reduce the communication overhead by limiting information flow; this is also its major limitation.
A hierarchy is typically visualized as a pyramid, where the height of a rank depicts its power status and the width of a level represents how many people or business divisions are at that level relative to the whole: the highest-ranking people are at the apex, and there are very few of them; the base may include thousands of people who have no subordinates. These hierarchies are typically depicted with a tree or triangle diagram, creating an organizational chart or organigram. Those nearest the top have more power than those nearest the bottom, and there are fewer people at the top than at the bottom. As a result, superiors in a hierarchy generally have higher status and command greater rewards than their subordinates.
Common models
All governments and most companies have similar structures. Traditionally, the monarch was the pinnacle of the state. In many countries, feudalism and manorialism provided a formal social structure that established hierarchical links at every level of society, with the monarch at the top.
In modern post-feudal states the nominal top of the hierarchy still remains the head of state, which may be a president or a constitutional monarch, although in many modern states the powers of the head of state are delegated among different bodies. Below the head, there is commonly a senate, parliament or congress, which in turn often delegate the day-to-day running of the country to a prime minister. In many democracies, the people are considered to be the notional top of the hierarchy, over the head of state; in reality, the people's power is restricted to voting in elections.
In business, the business owner traditionally occupied the pinnacle of the organization. In most modern large companies, there is now no longer a single dominant shareholder, and the collective power of the business owners is for most purposes delegated to a board of directors, which in turn delegates the day-to-day running of the company to a managing director or CEO. Again, although the shareholders of the company are the nominal top of the hierarchy, in reality many companies are run at least in part as personal fiefdoms by their management; corporate governance rules are an attempt to mitigate this tendency.
Studies of hierarchical organizations
The organizational development theorist Elliott Jacques identified a special role for hierarchy in his concept of requisite organization.
The iron law of oligarchy, introduced by Robert Michels, describes the inevitable tendency of hierarchical organizations to become oligarchic in their decision making.
Hierarchiology is the term coined by Dr. Laurence J. Peter, originator of the Peter Principle described in his humorous book of the same name, to refer to the study of hierarchical organizations and the behavior of their members.
Having formulated the Principle, I discovered that I had inadvertently founded a new science, hierarchiology, the study of hierarchies. The term hierarchy was originally used to describe the system of church government by priests graded into ranks. The contemporary meaning includes any organization whose members or employees are arranged in order of rank, grade or class. Hierarchiology, although a relatively recent discipline, appears to have great applicability to the fields of public and private administration. The IRG Solution - hierarchical incompetence and how to overcome it argued that hierarchies were inherently incompetent, and were only able to function due to large amounts of informal lateral communication fostered by private informal networks.
Criticism and alternatives
In the work of diverse theorists such as William James (1842-1910), Michel Foucault (1926-1984) and Hayden White, important critiques of hierarchical epistemology are advanced. James famously asserts in his work "Radical Empiricism" that clear distinctions of type and category are a constant but unwritten goal of scientific reasoning, so that when they are discovered, success is declared. But if aspects of the world are organized differently, involving inherent and intractable ambiguities, then scientific questions are often considered unresolved. A hesitation to declare success upon the discovery of ambiguities leaves heterarchy at an artificial and subjective disadvantage in the scope of human knowledge. This bias is an artifact of an aesthetic or pedagogical preference for hierarchy, and not necessarily an expression of objective observation.
Hierarchies and hierarchical thinking have been criticized by many people, including Susan McClary. One political philosophy, anarchism, is vehemently opposed to hierarchical organization in any form of human relations. Heterarchy is the most commonly proposed alternative to hierarchy, and it has been combined with responsible autonomy by Gerard Fairtlough in his work on Triarchy theory.
Amidst constant innovation in information and communication technologies, hierarchical authority structures are giving way to greater decision-making latitude for individuals and more flexible definitions of job activities. This new style of work presents a challenge to existing organizational forms, with some research studies contrasting traditional organizational forms against groups that operate as online communities, characterized by personal motivation and the satisfaction of making one's own decisions.

B) Flat Organizational Structure
Ans:- Flat organization (also known as horizontal organization) refers to an organizational structure with few or no levels of intervening management between staff and managers. The idea is that well-trained workers will be more productive when they are more directly involved in the decision making process, rather than closely supervised by many layers of management.
This structure is generally possible only in smaller organizations or individual units within larger organizations. When they reach a critical size, organizations can retain a streamlined structure but cannot keep a completely flat manager-to-staff relationship without impacting productivity. Certain financial responsibilities may also require a more conventional structure. Some theorize that flat organizations become more traditionally hierarchical when they begin to be geared towards productivity.
The flat organization model promotes employee involvement through a decentralized decision making process. By elevating the level of responsibility of baseline employees, and by eliminating layers of middle management, comments and feedback reach all personnel involved in decisions more quickly. Expected response to customer feedback can thus become more rapid. Since the interaction between workers is more frequent, this organizational structure generally depends upon a much more personal relationship between workers and managers. Hence the structure can be more time-consuming to build than a traditional bureaucratic/hierarchical model.
C) Matrix Organizational Structure
Ans:- The matrix structure groups employees by both function and product. This structure can combine the best of both separate structures. A matrix organization frequently uses teams of employees to accomplish work, in order to take advantage of the strengths, as well as make up for the weaknesses, of functional and decentralized forms. An example would be a company that produces two products, "product a" and "product b". Using the matrix structure, this company would organize functions within the company as follows: "product a" sales department, "product a" customer service department, "product a" accounting, "product b" sales department, "product b" customer service department, "product b" accounting department. Matrix structure is amongst the purest of organizational structures, a simple lattice emulating order and regularity demonstrated in nature.
• Weak/Functional Matrix: A project manager with only limited authority is assigned to oversee the cross-functional aspects of the project. The functional managers maintain control over their resources and project areas.
• Balanced/Functional Matrix: A project manager is assigned to oversee the project. Power is shared equally between the project manager and the functional managers. It brings the best aspects of functional and projectized organizations. However, this is the most difficult system to maintain, as sharing power is a delicate proposition.
• Strong/Project Matrix: A project manager is primarily responsible for the project. Functional managers provide technical expertise and assign resources as needed.
Among these matrices, there is no single best format; implementation success always depends on the organization's purpose and function.

2. Explain the following theoretical concepts in the context of Budget estimations:
A) Capital budgeting
Ans:- Capital budgeting (or investment appraisal) is the planning process used to determine whether a firm's long-term investments, such as new machinery, replacement machinery, new plants, new products, and research and development projects, are worth pursuing. It is the budget for major capital, or investment, expenditures.
Many formal methods are used in capital budgeting, including the techniques such as
• Accounting rate of return
• Net present value
• Profitability index
• Internal rate of return
• Modified internal rate of return
• Equivalent annuity
These methods use the incremental cash flows from each potential investment, or project. Techniques based on accounting earnings and accounting rules are sometimes used - though economists consider this to be improper - such as the accounting rate of return and "return on investment." Simplified and hybrid methods are used as well, such as payback period and discounted payback period.
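The two most common discounted-cash-flow techniques from the list above can be sketched in a few lines of Python. This is a minimal illustration with made-up cash flows, not a production implementation; the IRR solver uses simple bisection and assumes the cash-flow series has a single sign change.

```python
def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the time-0 outlay (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0):
    """Internal rate of return: the rate at which NPV is zero, by bisection."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(lo, cash_flows) * npv(mid, cash_flows) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Hypothetical project: $1,000 outlay, then $500 inflow per year for 3 years
flows = [-1000, 500, 500, 500]
print(round(npv(0.10, flows), 2))  # positive NPV at a 10% hurdle rate (~243.43)
print(round(irr(flows), 4))        # the discount rate at which NPV falls to zero
```

With a positive NPV at the firm's hurdle rate (and an IRR above it), the project would be accepted under either criterion.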
B) Equivalent annuity
Ans:- The equivalent annuity method expresses the NPV as an annualized cash flow by dividing it by the present value of the annuity factor. It is often used when assessing only the costs of specific projects that have the same cash inflows. In this form it is known as the equivalent annual cost (EAC) method, and is the cost per year of owning and operating an asset over its entire lifespan.
It is often used when comparing investment projects of unequal lifespans. For example if project A has an expected lifetime of 7 years, and project B has an expected lifetime of 11 years it would be improper to simply compare the net present values (NPVs) of the two projects, unless the projects could not be repeated.
Alternatively, the chain method can be used with the NPV method under the assumption that the projects will be replaced with the same cash flows each time. To compare projects of unequal length, say 3 years and 4 years, the projects are chained together: four repetitions of the 3-year project are compared to three repetitions of the 4-year project. The chain method and the EAC method give mathematically equivalent answers.
The assumption of the same cash flows for each link in the chain is essentially an assumption of zero inflation, so a real interest rate rather than a nominal interest rate is commonly used in the calculations.
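The EAC comparison described above can be sketched as follows. The cost figures here are hypothetical (the text gives only the 7- and 11-year lifespans), and a 5% real rate is assumed:

```python
def annuity_factor(rate, n):
    """Present value of $1 per year for n years at the given rate."""
    return (1 - (1 + rate) ** -n) / rate

def equivalent_annual_cost(npv_of_costs, rate, n):
    """Spread a project's NPV of costs over its life as a level annual charge."""
    return npv_of_costs / annuity_factor(rate, n)

# Hypothetical: project A costs $10,000 (NPV of costs) over 7 years,
# project B costs $13,000 (NPV of costs) over 11 years; 5% real rate
eac_a = equivalent_annual_cost(10_000, 0.05, 7)
eac_b = equivalent_annual_cost(13_000, 0.05, 11)
print(round(eac_a, 2), round(eac_b, 2))  # B is cheaper per year of service
```

Even though B has the larger NPV of costs, its longer life gives it the lower annual cost, which is exactly the distortion the EAC method corrects for.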
C) Return on investment (ROI)
Ans:- Software practitioners often hear, "We cannot quantify the benefits of SPI, because it is not possible to express indirect benefits such as customer satisfaction or personnel motivation in financial numbers," as an argument against calculating the return on investment for software process improvement (SPI). An approach is presented detailing how to measure benefits, as are case-study findings from pragmatic ROI analyses for SPI and an overview of the literature on the ROI of SPI. ROI is a performance measure used to evaluate the efficiency of an investment or to compare the efficiency of a number of different investments. To calculate ROI, the benefit (return) of an investment is divided by the cost of the investment; the result is expressed as a percentage or a ratio.

The return on investment formula:

ROI = (Gain from Investment - Cost of Investment) / Cost of Investment

Return on investment is a very popular metric because of its versatility and simplicity. That is, if an investment does not have a positive ROI, or if there are other opportunities with a higher ROI, then the investment should not be undertaken.
Keep in mind that the calculation for return on investment and, therefore the definition, can be modified to suit the situation -it all depends on what you include as returns and costs. The definition of the term in the broadest sense just attempts to measure the profitability of an investment and, as such, there is no one "right" calculation.

For example, a marketer may compare two different products by dividing the revenue that each product has generated by its respective marketing expenses. A financial analyst, however, may compare the same two products using an entirely different ROI calculation, perhaps by dividing the net income of an investment by the total value of all resources that have been employed to make and sell the product.

This flexibility has a downside, as ROI calculations can be easily manipulated to suit the user's purposes, and the result can be expressed in many different ways. When using this metric, make sure you understand what inputs are being used.
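The basic calculation is a one-liner; as the text notes, the flexibility (and the risk of manipulation) lies entirely in what you count as gain and cost. A minimal sketch with hypothetical figures:

```python
def roi(gain_from_investment, cost_of_investment):
    """ROI = (gain - cost) / cost, as a decimal (0.20 means 20%)."""
    return (gain_from_investment - cost_of_investment) / cost_of_investment

# Hypothetical: a $100,000 investment returns $120,000
print(f"{roi(120_000, 100_000):.0%}")  # 20%
```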

D) Payback models
Ans:- Capital budgeting decisions are crucial to a firm's success for several reasons. First, capital expenditures typically require large outlays of funds. Second, firms must ascertain the best way to raise and repay these funds. Third, most capital budgeting decisions require a long-term commitment. Finally, the timing of capital budgeting decisions is important. When large amounts of funds are raised, firms must pay close attention to the financial markets because the cost of capital is directly related to the current interest rate.
The need for relevant information and analysis of capital budgeting alternatives has inspired the evolution of a series of models to assist firms in making the "best" allocation of resources. Among the earliest methods available were the payback model, which simply determines the length of time required for the firm to recover its cash outlay, and the return on investment model, which evaluates the project based on standard historical cost accounting estimates. The next group of models employs the concept of the time value of money to obtain a superior measure of the cost/benefit trade-off of potential projects. More current models attempt to include in the analysis non-quantifiable factors that may be highly significant in the project decision but could not be captured in the earlier models.
This article explains budgeting models currently being used by large companies, the division responsible for evaluating capital budgeting projects, the most important and most difficult stages in the capital budgeting process, the cost of capital cutoff rate, and the methods used to adjust for risk. A possible rationale is provided for the choices that firms are making among the available models. The discussion identifies difficulties inherent in the traditional discounted cash flow models and suggests that these problems may have led some firms to choose the simpler models.
Capital budgeting decisions are extremely important and complex and have inspired many research studies. In an in-depth study of the capital budgeting projects of 12 large manufacturing firms, Marc Ross found in 1972 that although techniques that incorporated discounted cash flow were used to some extent, firms relied rather heavily on the simplistic payback model, especially for smaller projects. In addition, when discounted cash flow techniques were used, they were often simplified. For example, some firms' simplifying assumptions included the use of the same economic life for all projects even though the actual lives might be different. Further, firms often did not adjust their analysis for risk (Ross, 1986).
In 1972 Thomas P. Klammer surveyed a sample of 369 firms from the 1969 Compustat listing of manufacturing firms that appeared in significant industry groups and made at least $1 million of capital expenditures in each of the five years 1963-1967. Respondents were asked to identify the capital budgeting techniques in use in 1959, 1964, and 1970. The results indicated an increased use of techniques that incorporated the present value (Klammer, 1984).
James Fremgen surveyed a random sample of 250 business firms in 1973 that were in the 1969 edition of Dun and Bradstreet's Reference Book of Corporate Management. Questionnaires were sent to companies engaged in manufacturing, retailing, mining, transportation, land development, entertainment, public utilities and conglomerates to study the capital budgeting models used, stages of the capital budgeting process, and the methods used to adjust for risk. He found that firms considered the internal rate of return model to be the most important model for decision-making. He also found that the majority of firms increased their profitability requirements to adjust for risk and considered defining a project and determining the cash flow projections as the most important and most difficult stage of the capital budgeting process (Fremgen, 1973).
J. William Petty, David P. Scott, and Monroe M. Bird examined responses from 109 controllers of 1971 Fortune 500 (by sales dollars) firms concerning the techniques their companies used to evaluate new and existing product lines. They found that internal rate of return was the method preferred for evaluating all projects. Moreover, they found that present value techniques were used more frequently to evaluate new product lines than existing product lines (Petty, 1975).
Laurence G. Gitman and John R. Forrester Jr. analyzed the responses from 110 firms who replied to their 1977 survey of the 600 companies that Forbes reported as having the greatest stock price growth over the 1971-1979 period. The survey contained questions concerning capital budgeting techniques, the division of responsibility for capital budgeting decisions, the most important and most difficult stages of capital budgeting, the cutoff rate, and the methods used to assess risk. They found that the discounted cash flow techniques were the most popular methods for evaluating projects, especially the internal rate of return. However, many firms still used the payback method as a backup or secondary approach. The majority of the companies that responded to the survey indicated that the Finance Department was responsible for analyzing capital budgeting projects. Respondents also indicated that project definition and cash flow estimation was the most difficult and most critical stage of the capital budgeting process. The majority of firms had a cost of capital or cutoff rate between 10 and 15 percent, and they most often adjusted for risk by increasing the minimum acceptable rate of return on capital projects (Gitman, 1977).
In 1981, Suk H. Kim and Edward J. Farragher surveyed the 1979 Fortune 100 Chief Financial Officers about their 1975 and 1979 usage of techniques for evaluating capital budgeting projects. They found that in both years, the majority of the firms relied on a discounted cash flow method (either the internal rate of return or the net present value) as the primary method and the payback as the secondary method (Kim, 1981).
Capital Budgeting Techniques
Several models are commonly used to evaluate capital budgeting projects: the payback, accounting rate of return, present value, internal rate of return, profitability index models and others.
The payback model measures the amount of time required for cash income from a project to exactly equal the initial investment. The accounting rate of return is the ratio of the project's average after-tax income to its average book value.
Academicians criticize both the payback and the accounting rate of return models because they ignore the time value of money and the size of the investment.
When the net present value model is used, the firm discounts the projected income from the project at the firm's minimum acceptable rate of return (hurdle rate). The net present value is the difference between the present value of the income and the cost of the project. If the net present value of the project is positive, the project is accepted; conversely, if the net present value is negative, the project is rejected. The internal rate of return model equates the cost of the project to the present value of its projected income. The net present value and the internal rate of return models overcome the time value of money deficiency; however, they fail to consider the size of a project.
Furthermore, the payback model does not consider returns from the project after the initial investment is recovered. The profitability index is a ratio of the project's value to its initial investment. The firm then selects the project with the highest profitability index and continues to select until the investment budget is exhausted. The profitability index overcomes both the time value of money and the size deficiencies.
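The payback and profitability-index calculations described above can be sketched as follows. The cash flows are hypothetical, and the payback function interpolates within the final year of recovery:

```python
def payback_period(initial_outlay, annual_inflows):
    """Years until cumulative inflows recover the outlay; None if never."""
    remaining = initial_outlay
    for year, inflow in enumerate(annual_inflows, start=1):
        if inflow >= remaining:
            return year - 1 + remaining / inflow  # fractional final year
        remaining -= inflow
    return None

def profitability_index(rate, initial_outlay, annual_inflows):
    """PV of inflows divided by the initial investment; accept if > 1."""
    pv = sum(cf / (1 + rate) ** t for t, cf in enumerate(annual_inflows, start=1))
    return pv / initial_outlay

# Hypothetical: $1,000 outlay, $400 inflow per year for three years
print(payback_period(1000, [400, 400, 400]))                       # 2.5 years
print(round(profitability_index(0.10, 1000, [400, 400, 400]), 3))  # below 1: reject
```

Note how the two models can disagree: a 2.5-year payback may look acceptable, yet the profitability index at a 10% discount rate is below 1, illustrating the text's point that payback ignores the time value of money.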
Some decision makers have criticized the net cash flow method because they simply do not agree with the decisions indicated by the results from the models. In some cases, managers are reluctant to make important decisions based on uncertain estimates of cash flows far in the future. Thus, they consider only near-term cash flows or are distrustful of the output of the models. In other cases, managers may have predetermined notions about which projects to adopt and may, therefore, "massage" the numbers to achieve the result they desire. Thus, in many cases, the negative results occurred because of inappropriate input into the models, rather than from the models themselves. One area of particular concern is the choice of discount rate. For example, Robert S. Kaplan and Anthony A. Atkinson suggested, in 1985, that users often employ too high a discount rate, either by choosing too high a cost of capital or by using a higher rate as an adjustment for risk. An inappropriately high discount rate yields too high a hurdle rate or too low a net present value, and thus a negative signal about the project. They recommend using a discount rate that reflects the firm's true cost of capital according to sound finance theory. Moreover, they say that risk should be analyzed by modeling multiple scenarios (best to worst cases) in a manner similar to flexible budgeting. Finally, when the discount rate incorporates inflation, the user must be careful to adjust future cash flows for inflation as well (Kaplan, 1985).
Other areas of concern in using capital budgeting models involve appropriate comparisons. Decision makers sometimes consider a new project as discrete and more independent of the rest of operations than it really is. They may assume that, without the project, conditions will remain just as they have been while, in reality, the environment will change with or without it. Careful consideration needs to be given to what conditions will exist without the project as well as with it, so that it will be compared with the appropriate benchmark. In analyzing cash flows with the project, users must consider the interaction of the project with remaining operations to appropriately capture all of the costs and benefits. Sufficient projections should be made for start-up costs, including new training and computer costs. Without planning for these items in advance, there may be a tendency to scrimp on them; as a result, later net cash flows will not be as positive as planned because the project is not running efficiently.
3. Describe the following:
A) Project Status Reporting
Ans:- The report should have a standard format. Every status report for each reporting period should be consistent with the previous report. Reports should be written in formal business language, since others besides your manager may read them. There are many project management tools on the market that can help take the complexity out of project reporting.
The reports should be short and focused. This can be accomplished by keeping a running list of notes or achievements throughout the project; that way it's easier to write the report when the time comes. You're proud of your and your team's accomplishments, so let everyone who reads your reports know it. Don't do yourself and your team a disservice by turning in a jumble of words and sentences that is bound to end up in the waste basket.
To start with, you need to have the date, the name of the project, and the name of the project manager or team, preferably at the top of the report. Make sure the items in the status report cover the correct time period that is specified. You should also include a short summary of the project on which you are working. That way there's no confusion as to which project is being referenced.
A list of the team's accomplishments during the reporting period should also be included. Think of the status report as a means of self-promotion and team promotion. If a team member did a great job, acknowledge them in the report. Don't, however, lie on a status report. If you encountered difficulties, highlight them and propose necessary solutions. If you overcame the problems, show how you resolved the issue.
Try to keep your report on the positive side. This isn't the place to whine about things. Use action words like completed, improved, fixed and corrected.
The next section should be all about what you plan on doing during the next reporting period. It can include tasks that were started and haven't yet been finished or projects on which you are getting ready to start. You should mention proposed projects or timeframes, and what has changed in the project plan and/or budget. This is also a good area to discuss any potential or current problems and whether or not assistance is needed from management.
You should always end your status report on an upbeat note. You have confidence in your team; tell your managers how much. Offer to discuss personally any items the manager may not understand or on which he or she wants more information.
You should keep copies of your status reports for reference. Also, if you track similar things from reporting period to reporting period, keep a chart or Excel document of these items that you can update as needed.
Management really depends on these reports. They are a way of communicating both progress and difficulties. Don't just send a half-written, uninformative report. Make your status report stand out. If you follow these simple instructions, you will not only make your manager happy, but also promote the accomplishments of your team and yourself.
B) Project Metrics
Ans:- There is a way to provide a project management status report. As an analogy, think about the criteria that a doctor might use to monitor health. He or she checks pulse, temperature, blood pressure etc. Similarly, a project needs to measure a mix of criteria.
If you distill the ground covered in methodologies and literature on Project Management, there are six criteria that constantly emerge. They are:
• Time (How are we going against the schedule?)
• Cost (How are we going against the budget?)
• Resources (How much time are we spending on the project?)
• Scope (Is scope creep in line with expectations?)
• Quality (Are we reviewing and fixing quality problems?)
• Actions (Do we have action items outstanding?)
By looking at the performance against these six criteria as a project dashboard, you can form a view of which parts of the project are OK and which are not. Before you start, you need to set the rules. For example, you might decide that as far as cost is concerned, you have a budget of $100k over 10 months and will spend $10k per month. After 6 months, you should have expenditure of $60k. Given that it is not an exact science, you might decide that if you are under 105% of budget to that point, the traffic light should still be green. If between 105% and 115% it is yellow, and over 115% it is red.
You can set these dashboard metrics before you start the project (or revise them at any time during the project) to manage the colour changes in the lights. This is not to say they should be managed to always stay green!!! The intention is to agree guidelines up front in conjunction with the Sponsor, and manage to those. In that way everyone with an involvement in the project can see the status and feel comfortable that what they see reflects the progress against previously agreed parameters.
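As a sketch, the cost traffic light from the example above (green up to 105% of planned spend to date, yellow up to 115%, red beyond) could be computed like this. The thresholds are whatever is agreed with the Sponsor, not fixed values:

```python
def cost_light(actual_spend, planned_spend, yellow=1.05, red=1.15):
    """Map spend-to-date against plan onto a traffic-light status."""
    ratio = actual_spend / planned_spend
    if ratio <= yellow:
        return "green"
    if ratio <= red:
        return "yellow"
    return "red"

# Six months into a $100k/10-month project: planned spend to date is $60k
print(cost_light(62_000, 60_000))  # green  (about 103% of plan)
print(cost_light(70_000, 60_000))  # red    (about 117% of plan)
```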
Project Time Line
The most common tool for managing a project is the schedule. Are we on time? At any point in a large project, there will probably be one or two tasks behind schedule and an equal number ahead of schedule. By setting parameters for the number of tasks behind schedule at which the traffic lights change, you can present the performance against schedule as a set of lights.
For example, it might be agreed with the Sponsor before the project begins that if the number of tasks overdue is greater than 2, the light turns yellow. If over 5 the light turns red.
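The schedule light from this example (more than 2 overdue tasks turns yellow, more than 5 turns red) is just as simple to express:

```python
def schedule_light(tasks_overdue, yellow_over=2, red_over=5):
    """Traffic light driven by the count of overdue tasks."""
    if tasks_overdue <= yellow_over:
        return "green"
    if tasks_overdue <= red_over:
        return "yellow"
    return "red"

print(schedule_light(1))  # green
print(schedule_light(4))  # yellow
print(schedule_light(7))  # red
```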
Project Cost Management
It is not sensible to monitor the budget only in total. If the budget were spent halfway through a project, we would suddenly be in trouble with no warning that the problem was occurring. For this reason, we need to create a project cash flow for the budget. Typically this is our estimate, month by month, of expenditure.
By calculating the expected expenditure versus actual expenditure at any point, we can calculate how we are performing against our budget. This requires some special handling if accrued costs are involved.
Project Resources
Just as we have a cash flow for money, we can do a project human resource projection. To do this we need to estimate how many man-days per period will be used on the project. By comparing that to timesheets, we can work out if we are spending more or less time on the project than estimated.
This technique does not measure trade-offs regarding the quality of the resources. Quality will be largely covered by the budget. If less skilled resources are allocated to the project, the cost will be lower and consequently the expenditure against budget will be less. It may however require longer to complete the work, or more resources may be needed.
If higher skilled resources are used, the budget may be exceeded but the work completed in a shorter time. It may even be a sensible decision to use more, lower skilled resources to achieve the same objectives. The decision is usually driven by the availability, cost and time to complete the task. This is all about project human resource management.
Parameters can be set so that if the time spent is on schedule, the light is green. Up to x% over, it is yellow, and over x%, red. If there is a lag between timesheets being completed and costs being accrued, it may be beneficial to set the limits for the lights under 100%.
C) Earned Value Analysis (EVA)
Ans:- Earned Value Analysis (EVA) is an industry standard method of measuring a project's progress at any given point in time, forecasting its completion date and final cost, and analyzing variances in the schedule and budget as the project proceeds. It compares the planned amount of work with what has actually been completed, to determine if the cost, schedule, and work accomplished are progressing in accordance with the plan. As work is completed, it is considered "earned". The Office of Management & Budget prescribed that EVA is required on construction projects in Circular A-11, Part 7:
"Agencies must use a performance-based acquisition management system, based on ANSI/EIA Standard 748, to measure achievement of the cost, schedule and performance goals."
EVA is a snapshot in time, which can be used as a management tool as an early warning system to detect deficient or endangered progress. It ensures a clear definition of work prior to beginning that work. It provides an objective measure of accomplishments, and an early and accurate picture of the contract status. It can be as simple as tracking an elemental cost estimate breakdown as a design progresses from concept through to 100% construction documents, or it can be calculated and tracked using a series of mathematical formulae (see below). In either case, it provides a basis for course correction.
Earned value analysis is always specific to a status date you choose. You may select the current date, a date in the past, or a date in the future. Most of the time, you'll set the status date to the date you last updated project progress. For example, if the current day is Tuesday, 9/12, but the project was last updated with progress on Friday, 9/8, you'd set the status date to Friday, 9/8.
Here is one example of how to analyze project performance with earned value analysis. Say a task has a total budgeted cost of $100, and by the status date it is 40 percent complete. The earned value (BCWP) is $40, but the scheduled value (BCWS) at the status date is $50. This tells you that the task is behind schedule: less value has been earned than was planned. Say also that the task's actual cost (ACWP) at the status date is $60, perhaps because a more expensive resource was assigned to the task. This tells you that the task is also over budget: more cost has been incurred than was planned. You can see how powerful such an analysis can be. The earlier in a project's life cycle you identify such discrepancies between ACWP, BCWP and BCWS, the sooner you can take steps to remedy the problem.
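The figures in this example can be run through the conventional EVA variance and index formulas (a sketch; SV, CV, SPI and CPI are the standard definitions):

```python
# Worked sketch of the example above, using the standard EVA formulas.
BCWS = 50.0   # budgeted cost of work scheduled at the status date (planned value)
BCWP = 40.0   # budgeted cost of work performed (earned value): 40% of the $100 budget
ACWP = 60.0   # actual cost of work performed

SV  = BCWP - BCWS   # schedule variance: negative means behind schedule
CV  = BCWP - ACWP   # cost variance: negative means over budget
SPI = BCWP / BCWS   # schedule performance index: < 1 means behind schedule
CPI = BCWP / ACWP   # cost performance index: < 1 means over budget

print(f"SV={SV:+.0f}  CV={CV:+.0f}  SPI={SPI:.2f}  CPI={CPI:.2f}")
```

Here SV is -10 and CV is -20, confirming the task is both behind schedule and over budget, and the indices (0.80 and about 0.67) express the same facts as ratios.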
One common way of visualizing the key values of earned value analysis is to use a chart. Start with a simple chart showing a steady accumulation of cost over the lifetime of a project:

The vertical y-axis shows the projected cumulative cost for a project.
The horizontal x-axis shows time.
The planned budget for this project shows a steady expenditure over the lifetime of the project. This line represents the cumulative baseline cost.
After work on the project has begun, a chart of the key values of earned value analysis may look like this:

The status date determines the values Project calculates.
The actual cost (ACWP) of this project has exceeded the budgeted cost.
The earned value (BCWP) reflects the true value of the work performed. In this case, the value of the work performed is less than the amount spent to perform that work.
4. Describe the following with respect to integration testing:

A) Big Bang
Ans:- In big bang integration, all (or most) of the developed modules are combined at once and the resulting system is tested as a whole, rather than being integrated incrementally. Because nothing is integrated until everything is ready, no stubs or drivers need to be written, and little integration planning is required, which makes the approach superficially attractive for small systems.
The major drawback is fault isolation. When a failure occurs, every interface is being exercised for the first time simultaneously, so the error could lie in any of the newly combined modules or in any interface between them. Defects are therefore found late, debugging is expensive, and some interface errors may escape detection altogether. For these reasons, big bang integration is generally suitable only for small, stable systems; for larger systems, incremental strategies such as bottom-up or top-down integration are preferred.
B) Bottom-up Integration
Ans:- In bottom-up integration testing, modules at the lowest level are developed first, and the modules that lead up towards the 'main' program are integrated and tested one at a time. Bottom-up integration uses test drivers to drive and pass appropriate data to the lower-level modules. As the code for each higher-level module becomes ready, the corresponding driver is replaced with the actual module. In this approach, the lower-level modules are tested extensively, ensuring that the most heavily used modules are tested properly.
• The behavior of the interaction points is crystal clear, as components are added in a controlled manner and tested repeatedly.
• Appropriate for applications where a bottom-up design methodology is used.
• Writing and maintaining test drivers (or a harness) is more difficult than writing stubs.
• This approach is not suitable for software developed using a top-down approach.
Bottom-up integration testing starts at the atomic module level. Atomic modules are the lowest levels in the program structure. Since modules are integrated from the bottom up, the processing required for modules subordinate to a given level is always available, so stubs are not required in this approach.
Bottom-up integration is implemented with the following steps:
• Low-level modules are combined into clusters that perform a specific software subfunction. These clusters are sometimes called builds.
• A driver (a control program for testing) is written to coordinate test case input and output.
• The build is tested.
• Drivers are removed and clusters are combined moving upward in the program structure.
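The steps above can be sketched with a minimal test driver. The two low-level modules and their cluster are hypothetical, invented for the illustration:

```python
# Minimal sketch of a bottom-up test driver. The "cluster" here is a pair of
# hypothetical low-level modules (parse_record, compute_total) exercised
# together by a driver before any higher-level module exists.
def parse_record(line):
    """Low-level module under test: parse a 'name,qty,price' record."""
    name, qty, price = line.split(",")
    return {"name": name, "qty": int(qty), "price": float(price)}

def compute_total(record):
    """Second module in the same cluster: line total for a record."""
    return record["qty"] * record["price"]

def driver():
    """Test driver: feeds known input to the cluster and checks the output."""
    record = parse_record("widget,3,2.50")
    assert record["qty"] == 3
    assert compute_total(record) == 7.5
    print("cluster test passed")

driver()
```

When the real higher-level module that calls this cluster is ready, `driver()` is discarded and the cluster is combined upward, as in the steps above.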

The figure shows how bottom-up integration is done. Whenever a new module is added as part of integration testing, the program structure changes. There may be new data flow paths, new I/O, or new control logic. These changes may cause problems with functions in the tested modules that were working fine previously.
To detect these errors, regression testing is done. Regression testing is the re-execution of some subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects in the program. It is the activity that helps ensure that changes (due to testing or for other reasons) do not introduce undesirable behavior or additional errors.

C) Top-down Integration
Ans:- Top-down integration testing is an incremental integration testing technique that begins by testing the top-level module and progressively adds lower-level modules one by one. Lower-level modules are normally simulated by stubs that mimic their functionality. As lower-level code is added, the stubs are replaced with the actual components. Top-down integration can be performed and tested in a breadth-first or depth-first manner.
• Drivers do not have to be written when top-down testing is used.
• It provides an early working version of the program, so design defects can be found and corrected early.
• Stubs have to be written with utmost care, as they must simulate the setting of output parameters.
• It is difficult to have other people or third parties perform this testing; mostly, developers will have to spend time on it.
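A minimal sketch of the stub idea, with hypothetical names throughout: the top-level module is tested first, while a stub stands in for a lower-level module that is not yet written.

```python
# Minimal sketch of top-down integration. All names are invented for the example.
def get_tax_rate_stub(region):
    """Stub for the real tax-lookup module: returns a fixed, known value."""
    return 0.10

def invoice_total(subtotal, region, tax_lookup=get_tax_rate_stub):
    """Top-level module under test; the lookup dependency is injected,
    so the stub can later be swapped for the real module."""
    return subtotal * (1 + tax_lookup(region))

# The top-level logic can be tested before the real lookup module exists.
print(invoice_total(100.0, "EU"))  # ~110.0 with the stub's fixed 10% rate
```

Because the stub returns a fixed, known output, any failure at this stage must lie in the top-level logic, which is exactly the fault-isolation benefit of incremental integration.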
Integration testing is a logical extension of unit testing. In its simplest form, two units that have already been tested are combined into a component and the interface between them is tested. A component, in this sense, refers to an integrated aggregate of more than one unit. In a realistic scenario, many units are combined into components, which are in turn aggregated into even larger parts of the program. The idea is to test combinations of pieces and eventually expand the process to test your modules with those of other groups. Eventually all the modules making up a process are tested together. Beyond that, if the program is composed of more than one process, they should be tested in pairs rather than all at once.
Integration testing identifies problems that occur when units are combined. By using a test plan that requires you to test each unit and ensure the viability of each before combining units, you know that any errors discovered when combining units are likely related to the interface between units. This method reduces the number of possibilities to a far simpler level of analysis.
You can do integration testing in a variety of ways but the following are three common strategies:
• The top-down approach to integration testing requires the highest-level modules to be tested and integrated first. This allows high-level logic and data flow to be tested early in the process, and it tends to minimize the need for drivers. However, the need for stubs complicates test management, and low-level utilities are tested relatively late in the development cycle. Another disadvantage of top-down integration testing is its poor support for early release of limited functionality.

MC0084-02 – Software Project Management & Quality Assurance

1. Describe the following concepts with the help of relevant examples:
A) Aids for Risk Identification
Ans:- The process of risk management is generally divided into three phases: risk identification, risk analysis and risk response. Of these three phases, risk identification is the best known and most widely practiced. The development of a more formalized, yet easy-to-use risk identification method is therefore of great importance. This research focuses on risk identification as it relates to the Egyptian business environment, including its procurement laws and regulations. Egypt is a developing country whose business environment emphasizes certain risks while diminishing others when compared with other countries.

This study tackles risk identification by investigating the most significant risks related to construction contracts for two power station projects in Egypt. These had typical characteristics such as large scale, fast track projects where a multi-package contracting plan was utilized. Ultimately, a checklist of risk categories was developed to aid contractors in their risk identification effort. The compilation of this checklist identified seven risk categories: (1) owner's obligations; (2) interface with other contractors; (3) liability risks; (4) financial risks; (5) risks related to changes; (6) technical risks; and (7) consortium risks.
Risk based audit (RBA) approaches represent a major trend in current audit methodology. The approach is based on risk analysis used to identify business strategy risk. The RBA has created a new set of research issues that need investigation. In particular, this approach has important implications for risk identification and risk assessment. The success of the RBA approach is contingent on understanding what factors improve or interfere with the accuracy of these risk judgments.
I examine how budget constraints and decision aid use affect risk identification and risk assessment. Unlike previous budget pressure studies, I cast budget constraints as a positive influence on auditors. I expect more stringent budget constraints to be motivating to the auditor as they provide a goal for the auditor to achieve. I also expect budget constraints to induce feelings of pressure leading to the use of time-pressure adaptation strategies. When auditors have use of a decision aid, they take advantage of these motivational goals and/or use beneficial adaptive strategies.
Overall, I find that auditor participants tend to be more accurate when identifying financial statement risks compared to business risks. Budget constraints have no effect on risk identification for financial or business risks; they also have no effect on financial risk assessments. On the other hand, business risk assessments are improved by implementing more stringent budget constraints, but only when a decision aid is also provided. Budget constraints can affect performance through a goal theory route or a time-pressure adaptation route. I investigate the paths through which budget constraints improve business risk assessments under decision aid use. I find that budget constraints directly affect performance, supporting a goal theory route. However, I do not find that budget constraints are mediated by perceived budget pressure as expected.
Auditors appear to use a positive adaptive strategy to respond to perceived budget pressure; however, perceived budget pressure is not induced by providing a more stringent budget.
Identification and documentation of risk factors is a crucial step for auditing firms in managing the risk of loss from engagements (e.g., Public Oversight Board 2000). Once risk factors are identified and documented, auditors can direct evidence-gathering procedures to address potential sources of misstatement (AICPA 1983, 1988, 1997). In addition to facilitating audit planning and pricing (e.g., Houston et al. 1999; Johnstone 2000), documentation of client risk factors also provides an important means of communication between audit team members, enabling them to focus on key issues. Thus, risk identification at the planning stage is important to audit effectiveness and efficiency. The risk factors found at this stage become the context in which auditors view evidence gained subsequently in the course of the engagement.
Despite the importance of risk identification in practice, this task has received little attention in the auditing literature. The purpose of this paper is to examine whether auditors' identification of client risk factors and audit test planning decisions are affected by the design of decision aids utilized during this phase of the audit. Behavioral studies of audit risk (e.g., Colbert 1988; Kaplan and Reckers 1989; Walo 1995; Houston et al. 1999) typically use hypothetical case scenarios containing client facts presented by researchers to participants. These studies show that auditors' risk assessments and/or planning decisions are responsive to identified risk factors. However, they bypass the difficult step of risk identification, in which an auditor must draw specific facts from a large knowledge base of client and industry data acquired in the field.
Because risk factors are client characteristics that indicate a higher likelihood of misstatements and/or auditor losses from the engagement (e.g., from litigation), they constitute "negative" information. Negative client information may be difficult for auditors to detect since even relatively risky clients have many positive characteristics (Bedard and Graham 1994). Psychological theory on the relative use of negative and positive information (e.g., schema theory; Alba and Hasher 1983) proposes that the expectations people bring to a task generally influence how information is used: more negative expectations lead to relatively more use of negative information. Much of the research on this topic (for a review and meta-analysis, see Stangor and McMillan [1992]) concentrates on two important factors that contribute to the formation of negative or positive expectations: problem formulation and prior experience. A number of studies have found that negatively oriented problems (i.e., emphasizing risk and possible associated losses) tend to increase attention to negative information. According to Dunegan (1993), this effect occurs because decision makers apply a more effortful and thorough analysis of information in such settings.
Another important factor associated with expectation formation is prior experience. Increased exposure to a client with elevated engagement risk should lead to a more negative expectation regarding the client's situation, with an accompanying increase in decision process effort and more attention to negative evidence (e.g., Wofford and Goodwin 1990). Because both problem formulation and prior experience appear to affect expectations by increasing effort and directing attention, it is likely that the most thorough analysis of negative information will occur when the task is negatively oriented and prior experience indicates high risk (e.g., Mittal and Ross 1998). Based on these studies, we expect that auditors' risk factor identification will increase when task design and prior experience combine to create a strong negative expectation, i.e., when auditors use a negatively oriented decision aid for riskier clients.
While psychological research supports this hypothesis, prior research in auditing does not generally show effects of task orientation on the use of negative information, regardless of risk context (e.g., Kida 1984; Trotman and Sng 1989; Smith and Kida 1991). We build upon these studies by examining whether a decision aid that emphasizes the presence and consequences of client risks (termed a "negative" orientation) will result in better identification of risk factors than one with less risk emphasis (a "positive" orientation). In so doing, we apply a novel approach that enhances external validity, and therefore may be more likely to reveal differences in risk identification. Specifically, we examine auditors' identification of risk factors for their own clients, while manipulating decision aid orientation and measuring underlying client risk. To accomplish the orientation manipulation, we ask pairs of auditors to identify risk factors for a pre-identified common client. Members of each pair were given a decision aid with a negative or positive orientation, and asked to use that decision aid to identify risk factors, assess risk levels, and plan audit tests. Four specific risk areas are included in the decision aid: the client's accounting function, fraud, EDP security, and management information quality.
The results confirm that auditors using the negatively oriented risk identification decision aid document more risk factors than those using the positively oriented decision aid, but only for their higher-risk clients. The improvement in risk identification is seen in all four of the specific risk areas studied. This finding is important because audit interventions such as decision aids are of greatest potential value in preventing losses from ineffective risk identification for clients of higher risk. Results also show that greater risk factor identification is associated with planning of more substantive (and total) audit tests. Finally, we find that auditors with prior engagement experience with the client identify more risk factors, highlighting the importance of the accumulation of risk knowledge through repeated client experience.
In the following section, we briefly describe the risk identification task and discuss psychological theory underlying our hypothesis. We also review prior related research on audit planning and decision aid design in the auditing context. We then outline methods of the study, present the results, and conclude with a discussion of the implications and limitations of our findings.
The auditor's overall engagement risk includes both client inherent/control risk and client business risk (Houston et al. 1999; Johnstone 2000; Public Oversight Board 2000). Inherent/control risk is the risk of account misstatements, which if undetected could lead to audit failure. Client business risk is the risk that the client may experience declining performance or business failure. Higher client business risk may increase pressure on management to misstate financial results and result in decreased resources devoted to accounting for transactions. Thus, client business risk increases the propensity for misstatements from fraud and/or error and also increases the auditor's business risk, e.g., as proxied by engagement profitability and potential litigation (AICPA 1983, AU 312.02). These distinct but related risks result from a complex array of client risk factors and industry information, including factors specific to the client such as weaknesses of control systems, personal characteristics of client managers, competitive pressures in the marketplace, etc. For auditors, the task of risk identification involves bringing together these factors from memory and recorded data, and documenting those that should be considered in planning the current engagement. Risk assessment involves combining and weighting these factors to form risk judgments. Decision aids used in practice to support audit planning usually require auditors to respond to a decision aid encompassing specific risk areas and to formulate assessments of risk relative to the areas, cycles, and/or assertions.
Audit firms use decision aids at all phases of the engagement, to provide guidance to engagement teams and promote consistency of decision making across engagements and over time. Several studies in auditing examine how appropriately designed decision aids and procedural guidance can improve audit effectiveness and efficiency. For instance, Eining et al. (1997) show that design features affect the extent of decision aid use and outcome quality in fraud risk assessment. Also, McDaniel and Kinney (1995) find that procedural guidance improves audit effectiveness in performing analytical procedures. Such research is important in providing theory-based evidence of ways in which audit firms can enhance performance of their personnel. The current study builds on the theory outlined in the following section, to investigate how decision aid design may impact the identification of risk factors.
Task Expectations and Information Use
Audit risk identification is primarily a search for negative (i.e., risk-increasing) client information. The specific focus of this study is to investigate whether decision aid design can enhance auditors' recall and management of negative client information during audit planning. The theory underlying this study is developed from a broad literature in psychology addressing factors that affect individuals' acquisition, recall, and/or use of negative and positive information about the object of a task. For instance, research based on schema theory (e.g., Alba and Hasher 1983) concludes that people generally follow confirmatory strategies, i.e., they tend to seek negative (positive) evidence in the presence of factors contributing to a negative (positive) expectation about the object or outcome of the task (see also, Snyder and Swann 1978; Klayman and Ha 1987). There are two main factors identified by prior research that can affect the expectations that people bring to a task: problem formulation and prior experience.
Problem formulation influences expectations by focusing attention on negative or positive aspects of the task, which can affect …
B) Risk Components and drivers
Ans:- When a firm makes an investment, in a new asset or a project, the return on that investment can be affected by several variables, most of which are not under the direct control of the firm. Some of the risk comes directly from the investment, a portion from competition, some from shifts in the industry, some from changes in exchange rates and some from macroeconomic factors. A portion of this risk, however, will be eliminated by the firm itself over the course of multiple investments and another portion by investors as they hold diversified portfolios.
The first source of risk is project-specific; an individual project may have higher or lower cashflows than expected, either because the firm misestimated the cashflows for that project or because of factors specific to that project. When firms take a large number of similar projects, it can be argued that much of this risk should be diversified away in the normal course of business. For instance, Disney, while considering making a new movie, exposes itself to estimation error - it may under or over estimate the cost and time of making the movie, and may also err in its estimates of revenues from both theatrical release and the sale of merchandise. Since Disney releases several movies a year, it can be argued that some or much of this risk should be diversifiable across movies produced during the course of the year. [1]
The second source of risk is competitive risk, whereby the earnings and cashflows on a project are affected (positively or negatively) by the actions of competitors. While a good project analysis will build in the expected reactions of competitors into estimates of profit margins and growth, the actual actions taken by competitors may differ from these expectations. In most cases, this component of risk will affect more than one project, and is therefore more difficult to diversify away in the normal course of business by the firm. Disney, for instance, in its analysis of revenues from its Disney retail store division may err in its assessments of the strength and strategies of competitors like Toys 'R' Us and Wal-Mart. While Disney cannot diversify away its competitive risk, stockholders in Disney can, if they are willing to hold stock in the competitors. [2]
The third source of risk is industry-specific risk: those factors that impact the earnings and cashflows of a specific industry. There are three sources of industry-specific risk. The first is technology risk, which reflects the effects of technologies that change or evolve in ways different from those expected when a project was originally analyzed. The second source is legal risk, which reflects the effect of changing laws and regulations. The third is commodity risk, which reflects the effects of price changes in commodities and services that are used or produced disproportionately by a specific industry. Disney, for instance, in assessing the prospects of its broadcasting division (ABC) is likely to be exposed to all three risks: to technology risk, as the lines between television entertainment and the internet are increasingly blurred by companies like Microsoft; to legal risk, as the laws governing broadcasting change; and to commodity risk, as the costs of making new television programs change over time. A firm cannot diversify away its industry-specific risk without diversifying across industries, either with new projects or through acquisitions. Stockholders in the firm should be able to diversify away industry-specific risk by holding portfolios of stocks from different industries.
The fourth source of risk is international risk. A firm faces this type of risk when it generates revenues or has costs outside its domestic market. In such cases, the earnings and cashflows will be affected by unexpected exchange rate movements or by political developments. Disney, for instance, was clearly exposed to this risk with its 33% stake in EuroDisney, the theme park it developed outside Paris. Some of this risk may be diversified away by the firm in the normal course of business by investing in projects in different countries whose currencies may not all move in the same direction. Citibank and McDonalds, for instance, operate in many different countries and are much less exposed to international risk than was Wal-Mart in 1994, when its foreign operations were restricted primarily to Mexico. Companies can also reduce their exposure to the exchange rate component of this risk by borrowing in the local currency to fund projects; for instance, by borrowing money in pesos to invest in Mexico. Investors should be able to reduce their exposure to international risk by diversifying globally.
The final source of risk is market risk: macroeconomic factors that affect essentially all companies and all projects, to varying degrees. For example, changes in interest rates will affect the value of projects already taken and those yet to be taken both directly, through the discount rates, and indirectly, through the cashflows. Other factors that affect all investments include the term structure (the difference between short and long term rates), the risk preferences of investors (as investors become more risk averse, more risky investments will lose value), inflation, and economic growth. While expected values of all these variables enter into project analysis, unexpected changes in these variables will affect the values of these investments. Neither investors nor firms can diversify away this risk since all risky investments bear some exposure to this risk.
C) Risk Prioritization
Ans:- The objective of risk prioritization is to prioritize the identified risks for mitigation. Both qualitative and quantitative methods can be used to categorize the risks by their relative severity and potential impact on the project. To compare identified risks effectively, and to provide a proactive perspective, the risk prioritization method should consider the following factors:

• the probability of the risk occurring,
• the consequence of the risk, and
• the cost and resources required to mitigate the risk.
The Risk Factor Product prioritization methodology consists of identifying project risks, assessing the probability and the consequence of each risk's occurrence, prioritizing the identified risks by calculating the Risk Factor (RF) Product for each risk, and mitigating the highest risks to resolution.
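As a sketch of the RF calculation: one common formulation (an assumption here, not stated in the text) is RF = Pf + Cf - Pf*Cf, with both Pf (probability of failure) and Cf (consequence of failure) scaled from 0 to 1. The two risks below are invented to echo the Risk A / Risk B discussion:

```python
# Sketch of a Risk Factor calculation. RF = Pf + Cf - Pf*Cf is one common
# formulation (assumed here); Pf and Cf are each rated on a 0..1 scale.
def risk_factor(pf, cf):
    """Combined risk factor for probability pf and consequence cf."""
    return pf + cf - pf * cf

# Illustrative risks: A is low-probability/high-consequence,
# B is high-probability/low-consequence.
risks = {"Risk A": (0.3, 0.9), "Risk B": (0.8, 0.4)}

# Rank risks from highest RF to lowest for mitigation.
for name, (pf, cf) in sorted(risks.items(), key=lambda kv: -risk_factor(*kv[1])):
    print(f"{name}: RF = {risk_factor(pf, cf):.2f}")
```

Because the formula saturates towards 1, a risk scores high if *either* factor is high, which matches the idea that Risk A and Risk B call for different mitigation strategies.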
Once the probability of failure (Pf) and consequence of failure (Cf) factors have been determined, they can be plotted on an isorisk contour chart to graphically portray their relative importance and impact on the project, as demonstrated in the figure below.
Note that the location of risk items on the isorisk contour chart (shown above) provides insight into the most cost-effective manner in which they may be mitigated. Risk A is best mitigated by a strategy that reduces the criticality of the risk's occurrence, while Risk B is best mitigated by a strategy that reduces the probability of the risk's occurrence.
Whenever there's too much to do and not enough time to do it, we have to prioritize so that at least the most important things get done. In testing, there's never enough time or resources. In addition, the consequences of skipping something important are severe. So prioritization has received a lot of attention. The approach is called Risk Driven Testing, where Risk has very specific meaning. Take the pieces of your system, whatever you use - modules, functions, section of the requirements - and rate each piece on two variables, Impact and Likelihood.
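The Impact/Likelihood rating can be sketched as a simple product-and-sort; the 1-5 scale and the example pieces are made up for illustration:

```python
# Sketch of risk-driven test prioritisation: rate each piece of the system on
# Impact and Likelihood (an assumed 1-5 scale) and sort by their product.
pieces = [
    # (piece, impact, likelihood) -- all invented example data
    ("payment module", 5, 4),
    ("report layout",  2, 3),
    ("login",          5, 2),
    ("help screens",   1, 2),
]

# Higher impact x likelihood means the piece should be tested first.
ranked = sorted(pieces, key=lambda p: p[1] * p[2], reverse=True)
for name, impact, likelihood in ranked:
    print(f"{name}: priority {impact * likelihood}")
```

When time runs out, testing stops from the bottom of this list, so the pieces most likely to fail in damaging ways still get covered.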
2. Describe the following with respect to Software Quality Assurance:
A) Supportive activities involved in the Software Life Cycle
Ans:- A software development process is a structure imposed on the development of a software product. Synonyms include software life cycle and software process. There are several models for such processes, each describing approaches to a variety of tasks or activities that take place during the process.
A large and growing number of software development organizations implement process methodologies. Many of them are in the defense industry, which in the U.S. requires a rating based on 'process models' to obtain contracts.
The international standard describing the method of selecting, implementing and monitoring the life cycle for software is ISO/IEC 12207.
A decades-long goal has been to find repeatable, predictable processes that improve productivity and quality. Some try to systematize or formalize the seemingly unruly task of writing software. Others apply project management techniques to writing software. Without project management, software projects can easily be delivered late or over budget. With large numbers of software projects not meeting their expectations in terms of functionality, cost, or delivery schedule, effective project management appears to be lacking.
Organizations may create a Software Engineering Process Group (SEPG), which is the focal point for process improvement. Composed of line practitioners who have varied skills, the group is at the center of the collaborative effort of everyone in the organization who is involved with software engineering process improvement.
Software development activities

The activities of the software development process represented in the waterfall model. There are several other models to represent this process.
An important early task in creating a software product is extracting the requirements, known as requirements analysis. Customers typically have an abstract idea of what they want as an end result, but not of what the software should do. Skilled and experienced software engineers recognize incomplete, ambiguous, or even contradictory requirements at this point. Frequently demonstrating live code may help reduce the risk that the requirements are incorrect.
Once the general requirements are gleaned from the client, the scope of the development should be analyzed and clearly stated in what is often called a scope document.
Certain functionality may be out of scope of the project as a function of cost or as a result of unclear requirements at the start of development. If the development is done externally, this document can be considered a legal document so that if there are ever disputes, any ambiguity of what was promised to the client can be clarified.
Implementation, testing and documenting
Implementation is the part of the process where software engineers actually program the code for the project.
Software testing is an integral and important part of the software development process. This part of the process ensures that defects are recognized as early as possible.
Documenting the internal design of software for the purpose of future maintenance and enhancement is done throughout development. This may also include the authoring of an API, be it external or internal.
Deployment and maintenance
Deployment starts after the code has been appropriately tested, approved for release, and sold or otherwise distributed into a production environment.
Software training and support are important, and many developers fail to realize this. It would not matter how much time and planning a development team puts into creating software if nobody in the organization ends up using it. People are often resistant to change and avoid venturing into an unfamiliar area, so as part of the deployment phase it is very important to hold training classes for new clients of the software.
Maintaining and enhancing software to cope with newly discovered problems or new requirements can take far more time than the initial development of the software. It may be necessary to add code that does not fit the original design to correct an unforeseen problem, or a customer may request more functionality and code can be added to accommodate the request. If the labor cost of the maintenance phase exceeds 25% of the prior phases' labor cost, then it is likely that the overall quality of at least one prior phase is poor. In that case, management should consider the option of rebuilding the system (or portions of it) before maintenance cost is out of control.
Bug Tracking System tools are often deployed at this stage of the process to allow development teams to interface with customer/field teams testing the software to identify any real or perceived issues. These software tools, both open source and commercially licensed, provide a customizable process to acquire, review, acknowledge, and respond to reported issues.
B) Standards and Procedures
Internally developed technology standards establish measurable controls and requirements to achieve policy objectives. Technology standards benefit an institution by defining and narrowing the scope of options and enabling greater focus by the supporting IT resources.

Standardization of hardware, software, and the operating environment offers a number of benefits and greatly facilitates the implementation and maintenance of “enterprise architecture.” Standardization of hardware and software (including configurations and versions) simplifies the task of creating and maintaining an accurate survey and inventory of the technology environment. It can also improve IT operations performance, reduce IT cost (particularly in acquisition, development, training, and maintenance), allow the leveraging of resources, enhance reliability and predictability, contribute to improved interoperability and integration, reduce the time to market for projects that involve technology re-configuration, and alleviate complexity in technology risk management.

The degree to which an institution standardizes its hardware and software is a business decision. Management should weigh the benefits of standardization against the competing benefits offered by “best of breed” technology solutions. Management should also consider that certain applications will not function effectively on the “standard” platform, or that hardware will not function properly in a “standard” configuration. Institutions should adopt minimum technology standards to leverage purchasing power, ensure interoperability, provide for adequate information systems security, allow for timely recovery and restoration of critical systems, and ease the burden of maintenance and support.

Management should implement hardware, operating system, and application standardization through policies that address every platform from host to end user. A variety of automated systems and network management tools are available to monitor and enforce standards and promote version control in the mainframe, server, and desktop environments. Standardization is also enforced through the change management process and internal audits.
Procedures describe the processes used to meet the requirements of the institution’s IT policies and standards. Management should develop written procedures for an institution’s critical operations. Procedures establish accountability and responsibility, provide specific controls for risk management policy guidance, define expectations for work processes and products, and serve as training tools. Because of the value procedures provide to these areas, management should update and review written procedures regularly. Updating written procedures is particularly important when processes, hardware, software, or configurations change. The scope of required procedures depends on the size and complexity of the institution’s IT operations and the variety of functions performed by IT operations; written procedures are appropriate across many such activities and functional areas.

C) Software Quality Assurance Activities
Ans:- Quality assurance, or QA for short, refers to a program for the systematic monitoring and evaluation of the various aspects of a project, service, or facility to ensure that standards of quality are being met.
It is important to realize that quality is determined by the program sponsor. QA cannot absolutely guarantee the production of quality products, but it makes this more likely.
Two key principles characterise QA: "fit for purpose" (the product should be suitable for the intended purpose) and "right first time" (mistakes should be eliminated). QA includes regulation of the quality of raw materials, assemblies, products and components; services related to production; and management, production and inspection processes.
It is important to realize also that quality is determined by the intended users, clients or customers, not by society in general: it is not the same as 'expensive' or 'high quality'. Even goods with low prices can be considered quality items if they meet a market need.
Early efforts to control the quality of production
Early civil engineering projects needed to be built from specifications; for example, the four sides of the base of the Great Pyramid of Giza were required to be perpendicular to within 3.5 arcseconds.
During the Middle Ages, guilds adopted responsibility for quality control of their members, setting and maintaining certain standards for guild membership.
Royal governments purchasing material were interested in quality control as customers. For this reason, King John of England appointed William Wrotham to report about the construction and repair of ships. Centuries later, Samuel Pepys, Secretary to the British Admiralty, appointed multiple such overseers.
Prior to the extensive division of labor and mechanization resulting from the Industrial Revolution, it was possible for workers to control the quality of their own products. Working conditions then were arguably more conducive to professional pride.
The Industrial Revolution led to a system in which large groups of people performing a similar type of work were grouped together under the supervision of a foreman who was appointed to control the quality of work manufactured.
Wartime production
Around the time of World War I, manufacturing processes typically became more complex, with larger numbers of workers being supervised. This period saw the widespread introduction of mass production and piecework, which created problems: workmen could now earn more money by producing extra products, which in turn led to bad workmanship being passed on to the assembly lines.
To counter bad workmanship, full time inspectors were introduced into the factory to identify, quarantine and ideally correct product quality failures. Quality control by inspection in the 1920s and 1930s led to the growth of quality inspection functions, separately organised from production and big enough to be headed by superintendents.
The systematic approach to quality started in industrial manufacture during the 1930s, mostly in the USA, when some attention was given to the cost of scrap and rework. With the impact of mass production, which was required during the Second World War, it became necessary to introduce a more appropriate form of quality control which can be identified as Statistical Quality Control, or SQC. Some of the initial work for SQC is credited to Walter A. Shewhart of Bell Labs, starting with his famous one-page memorandum of 1924.
SQC came about with the realization that quality cannot simply be inspected into a batch of items. By making inspection organizations more efficient, it provides inspectors with control tools such as sampling and control charts, which work even where 100 per cent inspection is not practicable. Standard statistical techniques allow the producer to sample and test a certain proportion of the products for quality, in order to achieve the desired level of confidence in the quality of the entire batch or production run.
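The control-chart idea can be sketched in a few lines. The measurements below are hypothetical; the technique, due to Shewhart, is to set control limits at plus or minus three standard deviations around the mean of an in-control baseline, and then flag any later sample that falls outside the limits, rather than inspecting every item.

```python
# Minimal sketch of Shewhart-style statistical quality control
# (hypothetical measurements). Control limits are set at +/- 3 sigma
# from an in-control baseline; later samples outside the limits are
# flagged for inspection instead of inspecting 100 per cent of output.
import statistics

baseline = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.1, 10.3, 9.9]

mean = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
ucl = mean + 3 * sigma   # upper control limit
lcl = mean - 3 * sigma   # lower control limit

new_samples = [10.1, 9.5, 11.2]
flagged = [x for x in new_samples if not (lcl <= x <= ucl)]
print(f"limits: [{lcl:.2f}, {ucl:.2f}]  flagged: {flagged}")
```

The key design point is that the limits come from a known-good baseline: computing them from a batch that already contains defects would inflate sigma and hide the very outliers the chart is meant to catch.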
Total quality management
Invariably, the quality of output is directly dependent upon the quality of the participating constituents,[1] some of which can be controlled sustainably and effectively while others cannot. That fluid state spells a lack of quality control; processes that are managed for quality in such a way that quality is assured pertain to Total Quality Management. A major problem that led to decreased sales was that specifications omitted the most important question: what must the specifications state in order to satisfy the customer's requirements?
The major characteristics, ignored during the search to improve manufacture and overall business performance were:
• Reliability
• Maintainability
• Safety
• Strength
3. Explain the following Functional Specifications with suitable examples:
A) Black-Box Specification
Ans:- The term black box is a metaphor for a specific kind of abstraction. Black-box abstraction means that none of the internal workings are visible and that one can only observe output as a reaction to some specific input (Fig. 1). Black-box testing, for instance, works this way: test cases are selected without knowledge of the implementation, they are run, and the delivered results are checked for correctness.

Figure 1: Black-box component

For black-box specification of software components, pre- and postconditions are adequate. They describe the circumstances under which a component can be activated and the result of this activation.
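Such a pre- and postcondition contract can even be made executable. The `sqrt_spec` component below is a hypothetical example: callers see only the contract (when the component may be activated, and what the activation yields), never the implementation hidden inside.

```python
# Sketch: a black-box specification expressed as executable
# pre- and postconditions (hypothetical square-root component).
import math

def sqrt_spec(x):
    # Precondition: the circumstances under which the component
    # can be activated.
    assert x >= 0, "precondition violated: x must be non-negative"
    result = math.sqrt(x)   # the hidden implementation
    # Postcondition: the observable result of the activation.
    assert abs(result * result - x) < 1e-9, "postcondition violated"
    return result

print(sqrt_spec(9.0))  # 3.0
```

Nothing in the contract reveals how the result is computed, which is exactly the black-box property, and exactly why, as the text goes on to note, such specifications say nothing about calls the component makes to the outside.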
Unfortunately, black-box specifications are insufficient for more interactive components. The result of an operation may depend not only on the input, but also on the results of operations external to the specified component, which are called during execution.
Figure 2: Black-box component, which makes an external call

The external operation activated by the component needs to be specified, too. This specification is needed not to see how the operation is to be applied, but to see how it needs to be implemented; it is a specification of duties, sometimes referred to as required interfaces [AOB97]. Often, such an external operation will depend on the state of the component calling it. As an example of this, in [Szy97] the Observer pattern from [JV95] is analyzed. To perform its task, the observer needs to request state information from the observed object. If the observed object is a text and the observer a text-view component, the observer needs to know whether it sees the old or the new version of the text. In other words, one needs to know whether the observer is called before or after the data is changed. Specifying the text manipulation operations, such as delete, as black boxes does not reveal when the observer is called and what the state of the caller is at that time (Fig. 3). In this simple example, the intermediate state of the black box at the time of the call could be described verbally, but in more involved cases this approach usually fails.
B) State-Box Specification
Ans:- In general terms, the functional specification states what the proposed system is to do, whereas design is how the system is to be constructed to meet the functional specification. However in writing it, some consideration of design issues must take place, to ensure a realistic system is specified.
The functional specification should be clear, consistent, precise and unambiguous. The user requirement may mean that the user interface should be included in this document for some projects, whereas for others this will be done at the design stage either within a document or developed via a prototype.
It is important that there is a draft functional specification before the design stage of any project is started, and that the functional specification is agreed and issued, normally within a week of the final quality review. There must be a milestone on the project plan for the issue of the functional specification. The functional specification must be kept up to date, as it is the means of communication with the world outside the development staff.
The following should be used as a standard for a functional specification with some mandatory sections. The layout itself is at the discretion of the author except for Chapter 1. The document should have a standard front page, document authorisation page containing the title, issue, author and quality controller and contents page. Use diagrams where appropriate.
Do not be afraid of examples! Use them copiously throughout, as a brief, concrete example often illustrates a point much more succinctly than a normative explanation. Also remember to keep the examples interesting, as this is a useful way of keeping the reader’s interest – this is just as important in a functional specification as in any other type of document.

C) Clear-Box Specification
Ans:- A clear-box specification opens up the black box: instead of hiding the internal workings, it describes them. In the box-structure (Cleanroom) hierarchy, the clear box refines the state box by adding the procedural structures (sequence, alternation, iteration, and concurrency) that operate on the state data, while preserving the externally observable behaviour of the state box. A clear-box specification therefore shows not only what a component does but how it does it, and it is the step at which specification shades into design.
This section should include an explanation of the system we are replacing, even if it’s an old manual system.
What problems does the current system have? Which of these problems do we solve?
What useful functions of the current system will we not provide (Constraints)?
Depending on the depth of analysis required, this section may also describe the root causes of each problem. “Root cause” analysis is a systematic way of uncovering the underlying cause of an identified problem:
“It’s amazing how much people do know about the problem behind the problem; it’s just that no-one – by which we usually mean management – had taken the time to ask them before. So, ask them and then ask them again.”
Source: Managing Software Requirements: A Unified Approach by Dean Leffingwell, Don Widrig – Chapter 4, “The Five Steps in Problem Analysis”

4. Explain the following:
A) Mathematics in Software Development
Ans:- Formal methods are mathematical approaches to solving software (and hardware) problems at the requirements, specification and design levels. Examples of formal methods include the B-Method, Petri nets, Automated theorem proving, RAISE and VDM. Various formal specification notations are available, such as the Z notation. More generally, automata theory can be used to build up and validate application behavior by designing a system of finite state machines.
Finite state machine (FSM) based methodologies allow executable software specification and by-passing of conventional coding (see virtual finite state machine or event driven finite state machine).
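As a toy illustration of an executable specification (the workflow, states, and events below are invented for the example), a behaviour can be written down as a transition table and run directly, with no conventional coding of the control logic:

```python
# Minimal sketch of an executable finite-state-machine specification.
# The transition table *is* the specification of a hypothetical
# document-approval workflow, and it is executed as-is.

TRANSITIONS = {
    ("draft",  "submit"):  "review",
    ("review", "approve"): "approved",
    ("review", "reject"):  "draft",
}

def run(state, events):
    """Drive the machine through a sequence of events, rejecting any
    event the specification does not allow in the current state."""
    for event in events:
        key = (state, event)
        if key not in TRANSITIONS:
            raise ValueError(f"event {event!r} not allowed in state {state!r}")
        state = TRANSITIONS[key]
    return state

print(run("draft", ["submit", "reject", "submit", "approve"]))
```

Because the table is data, the same artifact serves as specification, implementation, and validation oracle: an illegal event sequence is detected the moment it is attempted.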
Formal methods are most likely to be applied in avionics software, particularly where the software is safety critical. Software safety assurance standards, such as DO-178B, demand formal methods at the highest level of categorization (Level A).
Formalization of software development is creeping in, in other places, with the application of Object Constraint Language (and specializations such as Java Modeling Language) and especially with Model-driven architecture allowing execution of designs, if not specifications.
Another emerging trend in software development is to write a specification in some form of logic (usually a variation of first-order logic, FOL), and then to directly execute the logic as though it were a program. The OWL language, based on Description Logic, is an example. There is also work on mapping some version of English (or another natural language) automatically to and from logic, and executing the logic directly. Examples are Attempto Controlled English, and Internet Business Logic, which does not seek to control the vocabulary or syntax. A feature of systems that support bidirectional English-logic mapping and direct execution of the logic is that they can be made to explain their results, in English, at the business or scientific level.
The Government Accountability Office, in a 2003 report on one of the Federal Aviation Administration’s air traffic control modernization programs,[2] recommends following the agency’s guidance for managing major acquisition systems by
• establishing, maintaining, and controlling an accurate, valid, and current performance measurement baseline, which would include negotiating all authorized, unpriced work within 3 months;
• conducting an integrated baseline review of any major contract modifications within 6 months; and
• preparing a rigorous life-cycle cost estimate, including a risk assessment, in accordance with the Acquisition System Toolset’s guidance and identifying the level of uncertainty inherent in the estimate.
Modern systems are critically dependent on software for their design and operation. The next generation of developers must be facile in the specification, design and implementation of dependable software using rigorous development processes. To help prepare this generation, the authors have developed a teaching approach and materials that serve a two-fold purpose: to promote an understanding and appreciation of the discrete Mathematical Structures (DM) that are the foundation of Software Engineering (SE) theory, and to provide motivation for and training in modern software development and analysis tools.
Software development is the act of working to produce/create software. This software could be produced for a variety of purposes - the three most common purposes are to meet specific needs of a specific client/business, to meet a perceived need of some set of potential users (the case with commercial and open source software), or for personal use (e.g. a scientist may write software to automate a mundane task).
The term software development is often used to refer to the activity of computer programming, which is the process of writing and maintaining the source code, whereas the broader sense of the term includes all that is involved between the conception of the desired software through to the final manifestation of the software. Therefore, software development may include research, new development, modification, reuse, re-engineering, maintenance, or any other activities that result in software products.[1] For larger software systems, usually developed by a team of people, some form of process is typically followed to guide the stages of production of the software.
The first phase in the software development process, especially, may involve many departments, including marketing, engineering, research and development, and general management.
There are several different approaches to software development, much like the various views of political parties toward governing a country. Some take a more structured, engineering-based approach to developing business solutions, whereas others may take a more incremental approach, where software evolves as it is developed piece-by-piece. Most methodologies share some combination of the following stages of software development:
• Market research
• Gathering requirements for the proposed business solution
• Analyzing the problem
• Devising a plan or design for the software-based solution
• Implementation (coding) of the software
• Testing the software
• Deployment
• Maintenance and bug fixing
These stages are often referred to collectively as the software development lifecycle, or SDLC. Different approaches to software development may carry out these stages in different orders, or devote more or less time to different stages. The level of detail of the documentation produced at each stage of software development may also vary. These stages may also be carried out in turn (a “waterfall” based approach), or they may be repeated over various cycles or iterations (a more "extreme" approach). The more extreme approach usually involves less time spent on planning and documentation, and more time spent on coding and development of automated tests. More “extreme” approaches also promote continuous testing throughout the development lifecycle, as well as having a working (or bug-free) product at all times. More structured or “waterfall” based approaches attempt to assess the majority of risks and develop a detailed plan for the software before implementation (coding) begins, and avoid significant design changes and re-coding in later stages of the software development lifecycle.
There are significant advantages and disadvantages to the various methodologies, and the best approach to solving a problem using software will often depend on the type of problem. If the problem is well understood and a solution can be effectively planned out ahead of time, the more "waterfall" based approach may work best. If, on the other hand, the problem is unique (at least to the development team) and the structure of the software solution cannot be easily envisioned, then a more "extreme" incremental approach may work best.

B) Mathematical Preliminaries
Ans:- Introduction
To develop an intuitive understanding of abstract concepts it is often useful to have the same idea expressed from different viewpoints. Fourier analysis may be viewed from two distinctly different vantage points, one geometrical and the other analytical. Geometry has an immediate appeal to visual science students, perhaps for the same reasons that it appealed to the ancient Greek geometers. The graphical nature of lines, shapes, and curves makes geometry the most visual branch of mathematics, as well as the most tangible. On the other hand, geometrical intuition quickly leads to a condition which one student colorfully described as "mental constipation". For example, the idea of plotting a point given its Cartesian (x, y) coordinates is simple enough to grasp, and can be generalized without too much protest to 3-dimensional space, but many students have great difficulty transcending the limits of the physical world in order to imagine plotting a point in 4-, 5-, or N-dimensional space. A similar difficulty must have been present in the minds of the ancient Greeks when contemplating the "method of exhaustion" solution to the area of a circle. The idea was to inscribe a regular polygon inside the circle and let the number of sides grow from 3 (a triangle) to 4 (a square) and so on without limit, as suggested in the figure below.

These ancients understood how to figure the area of the polygon, but they were never convinced that the area of the polygon would ever exactly match that of the circle, regardless of how large N grew. This conceptual hurdle was so high that it stood as a barrier for 2,000 years before it was cleared by the great minds of the 17th century who invented the concept of limits which is fundamental to the Calculus (Boyer, 1949). My teaching experience suggests there are still a great many ancient Greeks in our midst, and they usually show their colors first in Fourier analysis when attempting to make the transition from discretely sampled functions to continuous functions.
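The limiting argument the Greeks lacked can be stated compactly with the modern machinery. A regular N-gon inscribed in a circle of radius r has a known area, and the limit concept resolves the conceptual hurdle at a stroke:

```latex
A_N = \frac{N}{2}\, r^{2} \sin\!\frac{2\pi}{N}
    = \pi r^{2} \cdot \frac{\sin(2\pi/N)}{2\pi/N}
    \;\longrightarrow\; \pi r^{2} \quad (N \to \infty),
\qquad \text{since } \lim_{x\to 0}\frac{\sin x}{x} = 1 .
```

So the polygon areas do not merely approach the circle's area: their limit is exactly pi r squared, which is what the 17th-century invention of limits finally made rigorous.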
The modern student may pose the question, "Why should I spend my time learning to do Fourier analysis when I can buy a program for my personal computer that will do it for me at the press of a key?" Indeed, this seems to be the prevailing attitude, for the instruction manual of one popular analysis program remarks that "Fourier analysis is one of those things that everybody does, but nobody understands." Such an attitude may be tolerated in some fields, but not in science. It is a cardinal rule that the experimentalist must understand the principles of operation of any tool used to collect, process, and analyze data. Accordingly, the main goal of this course is to provide students with an understanding of Fourier analysis - what it is, what it does, and why it is useful. As with any tool, one gains an understanding most readily by practicing its use and for this reason homework problems form an integral part of the course. On the other hand, this is not a course in computer programming and therefore we will not consider in any detail the elegant fast Fourier transform (FFT) algorithms which make modern computer programs so efficient.
Lastly, we study Fourier analysis because it is the natural tool for describing physical phenomena which are periodic in nature. Examples include the annual cycle of the solar seasons, the monthly cycle of lunar events, daily cycles of circadian rhythms, and other periodic events on time scales of hours, minutes, or seconds such as the swinging pendulum, vibrating strings, or electrical oscillators. The surprising fact is that a tool for describing periodic events can also be used to describe non-periodic events. This notion was a source of great debate in Fourier's time, but today is accepted as the main reason for the ubiquitous applicability of Fourier's analysis in modern science.
Scalar arithmetic.
One of the earliest mathematical ideas invented by man is the notion of magnitude. Determining magnitude by counting is evidently a very old concept, as records from ancient Babylon and Egypt attest. The idea of whole numbers, or integers, is inherent in counting, and the ratio of integers was also used to represent simple fractions such as 1/2, 3/4, etc. Greek mathematicians associated magnitude with the lengths of lines or the areas of surfaces and so developed methods of computation which went a step beyond mere counting. For example, addition or subtraction of magnitudes could be achieved by the use of a compass and straightedge as shown in Fig. 1.1.

If the length of line segment A represents one quantity to be added, and the length of line segment B represents the second quantity, then the sum A+B is determined mechanically by abutting the two line segments end-to-end. The algebraic equivalent would be to define the length of some suitable line segment as a "unit length". Then, with the aid of a compass, one counts the integer number of these unit lengths needed to mark off the entire length of segments A and B. The total count is thus the length of the combined segment A+B. This method for addition of scalar magnitudes is our first example of equivalent geometric and algebraic methods of solution to a problem.
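The unit-counting procedure above can be sketched numerically. The unit length and the segment lengths below are hypothetical values chosen purely for illustration; the point is that stepping a "compass" along the combined segment gives the same count as the algebraic sum.

```python
# Adding two magnitudes by counting whole unit lengths, as one would
# step a compass along the abutted segments A and B.
def count_units(length, unit):
    """Return how many whole unit lengths fit along `length`."""
    count = 0
    while (count + 1) * unit <= length:
        count += 1
    return count

unit = 1.0          # the chosen "unit length"
A, B = 3.0, 4.0     # lengths of the two segments (illustrative values)

# Abut the segments end-to-end and count units along the whole:
total = count_units(A + B, unit)
print(total)  # 7: the geometric count agrees with the algebraic sum 3 + 4
```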
Consider now the related problem of determining the ratio of two magnitudes. The obvious method would seem to be to use the "unit length" measuring stick to find the lengths of the two magnitudes and quote the ratio of these integer values as the answer to the problem. This presupposes, however, that a suitable unit of measure can always be found. One imagines that it must have been a crushing blow to the Greek mathematicians when they discovered that this requirement cannot always be met. One glaring example emerges when attempting to determine the ratio of lengths for the edge (A) and diagonal (B) of a square as shown in Fig. 1.2. Line B is longer than A, but shorter than 2A. The Greeks dealt with this awkward situation by describing the lengths A and B as "incommensurate". They might just as well have used the words "irrational", "illogical", "false", or "fictitious".

Nowadays we are comfortable with the notion of "irrational" numbers as legitimate quantities which cannot be expressed as the ratio of two integers. Prominent examples are √2, π, and e. Nevertheless, there lurk in the pages ahead similar conceptual stumbling blocks, such as "negative frequency" and "imaginary numbers", which, although seemingly illogical and irrational at first, will hopefully become a trusted part of the student's toolbox through familiarity of use.
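The incommensurability of the side and diagonal can be illustrated by brute force. The sketch below, using Python's exact `Fraction` arithmetic, searches for integers p and q with (p/q)² equal to 2 and finds none, whereas a commensurate case like 4 succeeds immediately. The search bound is an arbitrary choice for illustration; no finite search proves irrationality, it merely shows the failure the Greeks encountered.

```python
from fractions import Fraction

# Search for an exact integer ratio p/q whose square equals `target`.
# Exact Fraction arithmetic avoids any floating-point rounding.
def find_exact_ratio(target, max_q):
    for q in range(1, max_q + 1):
        for p in range(1, 2 * max_q + 1):
            if Fraction(p, q) ** 2 == target:
                return Fraction(p, q)
    return None  # no commensurate ratio found within the search bound

print(find_exact_ratio(Fraction(2), 100))  # None: diagonal/side of a square
print(find_exact_ratio(Fraction(4), 100))  # 2: a commensurate case
```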
Vector arithmetic.
Some physical quantities have two or more attributes which need to be quantified. Common examples are velocity, which is speed in a particular direction, and force, which has both magnitude and direction. Such quantities are easily visualized geometrically (Fig. 1.3) as directed line segments called vectors. To create an algebraic representation of vector quantities, we can simply list the two scalar magnitudes which comprise the vector, i.e. (speed, direction). This is the polar form of vector notation. An alternative representation is suggested by our physical experience that if one travels at the rate of 3 m/s in a northerly direction and at the same time 4 m/s in an easterly direction, then the net result is a velocity of 5 m/s in a northeasterly direction. Thus, the vector magnitude may be specified by a list of the scalar magnitudes in two orthogonal directions. This Cartesian form is named after the great French mathematician René Descartes and is often described as a decomposition of the original vector into two mutually orthogonal components.
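The two representations are interchangeable, and the conversion is a small computation. In this sketch the direction angle is measured in radians counterclockwise from east, a convention chosen here for illustration; the 3 m/s and 4 m/s components are the ones from the velocity example above.

```python
import math

# Convert between the polar form (speed, direction) and the
# Cartesian form (easterly, northerly) of a velocity vector.
def polar_to_cartesian(speed, direction):
    return (speed * math.cos(direction), speed * math.sin(direction))

def cartesian_to_polar(x, y):
    return (math.hypot(x, y), math.atan2(y, x))

# 4 m/s easterly plus 3 m/s northerly gives a 5 m/s resultant:
speed, direction = cartesian_to_polar(4.0, 3.0)
print(round(speed, 6))  # 5.0

# Decomposing that resultant recovers the orthogonal components:
x, y = polar_to_cartesian(speed, direction)
print(round(x, 6), round(y, 6))  # 4.0 3.0
```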

Consider now the problem of defining what is meant by the addition or subtraction of two vector quantities. Our physical and geometrical intuition suggests that the notion of addition is inherent in the Cartesian method of representing vectors. That is, it makes sense to think of the northeasterly velocity vector V as the sum of the easterly velocity vector X and the northerly velocity vector Y. How would this notion of summation work in the case of two arbitrary velocity vectors W and V which are not necessarily orthogonal? A simple method emerges if we first decompose each of these vectors into their orthogonal components, as shown in Fig. 1.4. Since an easterly velocity has zero component in the northerly direction, we may find the combined velocity in the easterly direction simply by adding together the X-components of the two vectors. Similarly, the two Y-components may be added together to determine the total velocity in the northerly direction. Thus we can build upon our intuitive notion of adding scalar magnitudes illustrated in Fig. 1.1 to make an intuitively satisfying definition of vector addition which is useful for summing such physical quantities as velocity, force, and, as we shall see shortly, sinusoidal waveforms.

The way to generalize the above ideas to represent 3-dimensional quantities should be clear enough. Although drawing 3- and 4-dimensional vectors on paper is a challenge, drawing higher-dimensional vectors is impossible. On the other hand, extending the algebraic method to include a 3rd, 4th, or Nth dimension is as easy as adding another equation to the list and defining some new variables. Thus, although the geometrical method is more intuitive, for computational purposes the algebraic method quickly becomes the method of choice for solving problems.
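The algebraic method's indifference to dimension is easy to see in code: the componentwise sum below works identically for 2, 4, or N components, even though nothing beyond 3 dimensions can be drawn. The vectors are arbitrary illustrative values.

```python
# Componentwise addition for vectors of any dimension: each extra
# dimension is just one more entry in the list.
def add_nd(v, w):
    if len(v) != len(w):
        raise ValueError("vectors must have the same dimension")
    return [a + b for a, b in zip(v, w)]

print(add_nd([1, 2], [10, 20]))              # [11, 22]
print(add_nd([1, 2, 3, 4], [10, 20, 30, 40]))  # [11, 22, 33, 44]
```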
In summary, we have found that when vector quantities are decomposed into orthogonal components, simple rules emerge for combining vectors linearly (i.e. by addition or subtraction) that produce answers which make sense when applied to physical problems. In Fourier analysis we follow precisely the same strategy to show how arbitrary curves may be decomposed into a sum of orthogonal functions, the sines and cosines. By representing curves in this way, simple rules will emerge for combining curves in different ways and for calculating the outcome of physical events.
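The claim that sines and cosines are orthogonal can be checked numerically, in direct analogy with the vectors above: just as an easterly velocity has zero northerly component, the "dot product" of sin and cos over one full period is zero. The sketch below sums their product at equally spaced sample points; the sample count is an arbitrary choice for illustration.

```python
import math

# Discrete orthogonality check: sum the product of sin and cos at
# N equally spaced samples over one full period [0, 2*pi).
N = 1000
dot = sum(math.sin(2 * math.pi * n / N) * math.cos(2 * math.pi * n / N)
          for n in range(N))

print(abs(dot) < 1e-9)  # True: the two "component functions" are orthogonal
```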