Learning Ashram

Object Oriented Analysis & Design using UML


1. Describe the theory behind Object oriented systems development methodology.
Ans:- Existing approaches to object-oriented system development are poorly integrated in several ways. This inadequate integration is pervasive and causes numerous inefficiencies in the object-oriented development process. These problems can be addressed by abandoning the typical collection of loosely coupled object-oriented models in favor of a single, seamless system model. A seamless model not only overcomes these integration inefficiencies, but also raises the level of abstraction for object-oriented system implementation and enables same-paradigm system evolution.
Software development approaches
Every software development methodology takes its own approach to software development, but most are built on a small set of more general approaches, each of which has been developed into several specific methodologies. These approaches are:[1]
• Waterfall: linear framework type
• Prototyping: iterative framework type
• Incremental: combined linear and iterative framework type
• Spiral: combined linear and iterative framework type
• Rapid Application Development (RAD): iterative framework type
Waterfall model
The waterfall model is a sequential development process, in which development is seen as flowing steadily downwards (like a waterfall) through the phases of requirements analysis, design, implementation, testing (validation), integration, and maintenance. The first formal description of the waterfall model is often cited to be a 1970 article by Winston W. Royce. Basic principles of the waterfall model are:
• Project is divided into sequential phases, with some overlap and splashback acceptable between phases.
• Emphasis is on planning, time schedules, target dates, budgets and implementation of an entire system at one time.
• Tight control is maintained over the life of the project through the use of extensive written documentation, as well as through formal reviews and approval/signoff by the user and information technology management occurring at the end of most phases before beginning the next phase.
Prototyping
Software prototyping is the framework of activities, during software development, of creating prototypes, i.e., incomplete versions of the software program being developed. Basic principles of prototyping are:[1]
• Not a standalone, complete development methodology, but rather an approach to handling selected portions of a larger, more traditional development methodology (i.e. Incremental, Spiral, or Rapid Application Development (RAD)).
• Attempts to reduce inherent project risk by breaking a project into smaller segments and providing more ease-of-change during the development process.
• User is involved throughout the process, which increases the likelihood of user acceptance of the final implementation.
• Small-scale mock-ups of the system are developed following an iterative modification process until the prototype evolves to meet the users’ requirements.
• While most prototypes are developed with the expectation that they will be discarded, it is possible in some cases to evolve from prototype to working system.
• A basic understanding of the fundamental business problem is necessary to avoid solving the wrong problem.
Incremental development
Various methods are acceptable for combining linear and iterative systems development methodologies, with the primary objective of each being to reduce inherent project risk by breaking a project into smaller segments and providing more ease-of-change during the development process. Basic principles of incremental development are:[1]
• A series of mini-Waterfalls is performed, where all phases of the Waterfall development model are completed for a small part of the system before proceeding to the next increment, or
• Overall requirements are defined before proceeding to evolutionary, mini-Waterfall development of individual increments of the system, or
• The initial software concept, requirements analysis, and design of architecture and system core are defined using the Waterfall approach, followed by iterative Prototyping, which culminates in installation of the final prototype (i.e., working system).

The spiral model
The spiral model is a software development process combining elements of both design and prototyping-in-stages, in an effort to combine advantages of top-down and bottom-up concepts. Basic principles:
• Focus is on risk assessment and on minimizing project risk by breaking a project into smaller segments and providing more ease-of-change during the development process, as well as providing the opportunity to evaluate risks and weigh consideration of project continuation throughout the life cycle.
• "Each cycle involves a progression through the same sequence of steps, for each portion of the product and for each of its levels of elaboration, from an overall concept-of-operation document down to the coding of each individual program."[3]
• Each trip around the spiral traverses four basic quadrants: (1) determine objectives, alternatives, and constraints of the iteration; (2) evaluate alternatives and identify and resolve risks; (3) develop and verify deliverables from the iteration; and (4) plan the next iteration.[4]
• Begin each cycle with an identification of stakeholders and their win conditions, and end each cycle with review and commitment.
Rapid Application Development (RAD)
Rapid Application Development (RAD) is a software development methodology, which involves iterative development and the construction of prototypes. Rapid application development is a term originally used to describe a software development process introduced by James Martin in 1991.
Basic principles:[1]
• Key objective is for fast development and delivery of a high quality system at a relatively low investment cost.
• Attempts to reduce inherent project risk by breaking a project into smaller segments and providing more ease-of-change during the development process.
• Aims to produce high quality systems quickly, primarily through the use of iterative Prototyping (at any stage of development), active user involvement, and computerized development tools. These tools may include Graphical User Interface (GUI) builders, Computer Aided Software Engineering (CASE) tools, Database Management Systems (DBMS), fourth-generation programming languages, code generators, and object-oriented techniques.
• Key emphasis is on fulfilling the business need, while technological or engineering excellence is of lesser importance.
• Project control involves prioritizing development and defining delivery deadlines or “timeboxes”. If the project starts to slip, the emphasis is on reducing requirements to fit the timebox, not on extending the deadline.
• Generally includes Joint Application Development (JAD), where users are intensely involved in system design, either through consensus building in structured workshops, or through electronically facilitated interaction.
• Active user involvement is imperative.
• Iteratively produces production software, as opposed to a throwaway prototype.
• Produces documentation necessary to facilitate future development and maintenance.
• Standard systems analysis and design techniques can be fitted into this framework.

2. Describe the following with suitable examples:
A) Object and Identity
Ans:- An identity, in object-oriented programming, object-oriented design and object-oriented analysis, describes the property of objects that distinguishes them from other objects. This is closely related to the philosophical concept of identity. A reference can be used to refer to an object with a specific identity. A reference contains the information that is necessary for the identity property to be realized in the programming language, and allows access to the object with that identity. The type of the target of a reference is called a role.
Object identity is of little use if referential transparency is assumed, because identity gives access to aspects of an object that are not visible in its interface. Thus, objects need to be identified in the interface with a mechanism that is distinct from the methods used to access the object's state through its interface. With referential transparency, the value of the object's state would be identical or isomorphic with the values accessible from the object's interface; there would be no difference between the object's interface and its implementation, and the identity property would provide no useful additional information.
Identity and location
Identity is closely associated with the locations that objects are stored in. Some programming languages implement object identity by using the location of the object in computer memory as the mechanism for realizing object identity. However, objects can move from one place to another without change to their identity, and can be stored in places other than computer memory, so this does not fully characterize object identity.
This distinction is well illustrated in the case of live distributed objects, defined as instances of distributed multi-party protocols viewed from the object-oriented perspective. A distributed protocol instance, such as a multicast group or a publish-subscribe topic, does not exist in a single location; it typically spans across a set of machines distributed over the network, which can change as the machines join and leave the protocol instance. Nevertheless, it can still be viewed as a single object[1], as it has a well-defined identity, state and externally-visible behavior. In this case, the identity is determined by the same factors that distinguish between different protocol instances; these might include the description of a peer-to-peer protocol running between object replicas, including all essential parameters, such as the identifier of the multicast group or a publish-subscribe channel, or the identity of the services that maintains the membership. Thus, while identity and location are closely connected concepts, they are, in fact, not the same.
Consequences of identity
Identity of objects allows objects to be treated as black boxes. The object need not expose its internal structure. It can still be referred to, and its other properties can be accessed via its external behaviour associated with the identity. The identity provides a mechanism for referring to such parts of the object that are not exposed in the interface. Thus, identity is the basis for polymorphism in object-oriented programming.
Identity allows references to be compared for equality. Due to the identity property, this comparison has special semantics: if the comparison indicates that two references are equal, then the objects they point to are the same object. If the references do not compare equal, however, it is not necessarily guaranteed that the objects behind them have different identities. Two objects of the same type have the same identity if every change to one is also a change to the other.

B) Static and dynamic binding
Ans:- Static binding: if the compiler can resolve a binding at compile time, the binding is called static binding or early binding. All instance method calls are resolved at runtime, but all static method calls are resolved at compile time, so static method calls use static binding. Static methods are class methods, and hence they can be accessed using the class name itself (in fact, they are encouraged to be accessed via their class names rather than via object references), so access to them can be resolved at compile time using compile-time type information. This is also the reason why static methods cannot actually be overridden.
Similarly, access to all member variables in Java follows static binding, as Java does not support (in fact, it discourages) polymorphic behavior of member variables. For example:
class SuperClass {
    public String someVariable = "Some Variable in SuperClass";
}
class SubClass extends SuperClass {
    public String someVariable = "Some Variable in SubClass";
}
public class Test {
    public static void main(String[] args) {
        SuperClass superClass1 = new SuperClass();
        SuperClass superClass2 = new SubClass();
        System.out.println(superClass1.someVariable); // Some Variable in SuperClass
        System.out.println(superClass2.someVariable); // Some Variable in SuperClass
    }
}
We can observe that in both cases the member variable is resolved based on the declared type of the object reference, which the compiler can determine at compile time, hence static binding in this case. Another example of static binding is that of private methods: since they are never inherited, the compiler can resolve a call to any private method at compile time.
Dynamic binding: dynamic binding refers to the case where the compiler is not able to resolve the call and the binding is done at runtime only. Suppose we have a class named 'SuperClass' and another class named 'SubClass' that extends it. A 'SuperClass' reference can be assigned to an object of type 'SubClass' as well. If we have a method (say 'someMethod()') in 'SuperClass' that we override in 'SubClass', then a call to that method on a 'SuperClass' reference can only be resolved at runtime, as the compiler cannot be sure what type of object the reference will be pointing to at runtime.

SuperClass superClass1 = new SuperClass();
SuperClass superClass2 = new SubClass();

superClass1.someMethod(); // SuperClass version is called
superClass2.someMethod(); // SubClass version is called

Here, we see that even though both object references superClass1 and superClass2 have the declared type 'SuperClass', at runtime they refer to objects of types 'SuperClass' and 'SubClass' respectively.

Hence, at compile time the compiler cannot be sure which version of 'someMethod()' a call on these references will actually invoke: the superclass version or the subclass version.

Thus, we see that dynamic binding in Java binds method calls (inherited methods only, as they can be overridden in a subclass, so the compiler may not be sure which version of the method to call) based on the actual object type and not on the declared type of the object reference.

C) Object persistence
Ans:- Object persistence is the ability of an object to outlive the process that created it. One model for achieving it is object prevalence: an object-oriented, language-specific, transparent data storage and retrieval model based on the techniques of system snapshotting and transaction journalling. The first use of the term, and the first generic, publicly available implementation of object prevalence, was Prevayler, written for Java by Klaus Wuestefeld in 2001.
In the prevalent model, the object data is kept in memory in native object format, rather than being marshalled to an RDBMS or other data storage system. A snapshot of the data is regularly saved to disk and, in addition, all changes are serialised so that a log of transactions is also stored on disk. Snapshots and transaction logs can be stored in a language-specific serialization format for speed, or in XStream (XML) format for cross-language portability.
• Simply keeping objects in memory in their normal, natural, language-specific format is both orders of magnitude faster and more programmer-friendly than the multiple conversions that are needed when the objects are stored and retrieved from an RDBMS.
• The application needs enough memory to hold the entire database in RAM (the "prevalent hypothesis"). Prevalence advocates claim this is continuously alleviated by decreasing RAM prices, and the fact that many business databases are small enough to fit in memory anyway.
• Programmers need skill in working with business objects natively in RAM, rather than using explicit API calls to store them and queries to retrieve them.
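The prevalent model described above can be sketched in a few lines of Python. This is only an illustration of the idea (the names PrevalentSystem and deposit are invented for the example, and do not reflect Prevayler's actual API): business objects stay in RAM, every change is first serialised into a transaction journal, and a full snapshot can be taken at any time.

```python
import pickle

class PrevalentSystem:
    """Illustrative sketch: in-memory state plus a journal of executed commands."""
    def __init__(self):
        self.accounts = {}   # the "database": plain objects kept in RAM
        self.journal = []    # serialised transaction log (would live on disk)

    def execute(self, command, *args):
        # Journal the change before applying it, so state can be rebuilt by replay.
        self.journal.append(pickle.dumps((command.__name__, args)))
        command(self, *args)

    def snapshot(self):
        # Periodic full snapshot of the in-memory state (would be written to disk).
        return pickle.dumps(self.accounts)

def deposit(system, name, amount):
    system.accounts[name] = system.accounts.get(name, 0) + amount

system = PrevalentSystem()
system.execute(deposit, "alice", 100)
system.execute(deposit, "alice", 50)
restored = pickle.loads(system.snapshot())
print(restored["alice"])   # 150
```

The key design point is that reads are just ordinary attribute access on in-memory objects; only writes pay the cost of serialisation, which is why the model is fast as long as the "prevalent hypothesis" (everything fits in RAM) holds.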
D) Meta classes
Ans:- In object-oriented programming, a metaclass is a class whose instances are classes. Just as an ordinary class defines the behavior of certain objects, a metaclass defines the behavior of certain classes and their instances. Not all object-oriented programming languages support metaclasses. Among those that do, the extent to which metaclasses can override any given aspect of class behavior varies. Each language has its own metaobject protocol, a set of rules that govern how objects, classes, and metaclasses interact. For example, consider the following Python class:
class Car(object):
    __slots__ = ['make', 'model', 'year', 'color']

    def __init__(self, make, model, year, color):
        self.make = make
        self.model = model
        self.year = year
        self.color = color

    def description(self):
        """ Return a description of this car. """
        return "%s %s %s %s" % (self.color, self.year, self.make, self.model)
At run time, Car is itself an instance of type. The source code of the Car class, shown above, does not include such details as the size in bytes of Car objects, their binary layout in memory, how they are allocated, that the __init__ method is automatically called each time a Car is created, and so on. These details come into play not only when a new Car object is created, but also each time any attribute of a Car is accessed. In languages without metaclasses, these details are defined by the language specification and can't be overridden. In Python, the metaclass, type, controls these details of Car's behavior. They can be overridden by using a different metaclass instead of type.
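The claim that Car is itself an instance of type can be checked directly in Python (using a stripped-down stand-in for the class above):

```python
class Car(object):
    __slots__ = ['make', 'model', 'year', 'color']

print(type(Car))             # <class 'type'>
print(isinstance(Car, type)) # True: the class itself is an instance of its metaclass
```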
The above example contains some redundant code to do with the four attributes make, model, year, and color. It is possible to eliminate some of this redundancy using a metaclass. In Python, a metaclass is most easily defined as a subclass of type.
class AttributeInitType(type):
    def __call__(self, *args, **kwargs):
        """ Create a new instance. """

        # First, create the object in the normal default way.
        obj = type.__call__(self, *args)

        # Additionally, set attributes on the new object.
        for name in kwargs:
            setattr(obj, name, kwargs[name])

        # Return the new object.
        return obj
This metaclass only overrides object creation. All other aspects of class and object behavior are still handled by type.
Now the class Car can be rewritten to use this metaclass. This is done in Python 2 by assigning to __metaclass__ within the class definition (in Python 3 you provide a named argument, metaclass=M to the class definition instead):
class Car(object):
    __metaclass__ = AttributeInitType
    __slots__ = ['make', 'model', 'year', 'color']

    def description(self):
        """ Return a description of this car. """
        return "%s %s %s %s" % (self.color, self.year, self.make, self.model)
Car objects can then be instantiated like this:
cars = [
    Car(make='Toyota', model='Prius', year=2005, color='green'),
    Car(make='Ford', model='Prefect', year=1979, color='blue')]
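Since the __metaclass__ attribute works only in Python 2, here is a self-contained Python 3 version of the same example, using the metaclass keyword argument mentioned above:

```python
class AttributeInitType(type):
    def __call__(cls, *args, **kwargs):
        # First, create the object in the normal default way...
        obj = type.__call__(cls, *args)
        # ...then set each keyword argument as an attribute on the new object.
        for name, value in kwargs.items():
            setattr(obj, name, value)
        return obj

class Car(object, metaclass=AttributeInitType):
    __slots__ = ['make', 'model', 'year', 'color']

    def description(self):
        """ Return a description of this car. """
        return "%s %s %s %s" % (self.color, self.year, self.make, self.model)

cars = [
    Car(make='Toyota', model='Prius', year=2005, color='green'),
    Car(make='Ford', model='Prefect', year=1979, color='blue')]
print(cars[0].description())   # green 2005 Toyota Prius
```

Note that Car no longer needs an __init__ method at all: the redundant attribute-assignment code has moved into the metaclass, which is exactly the reduction in redundancy described above.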

3. Describe the following with suitable real time examples:
A) The software development process
Ans:- A software development process is a structure imposed on the development of a software product. Synonyms include software life cycle and software process. There are several models for such processes, each describing approaches to the variety of tasks or activities that take place during the process.
A growing body of software development organizations implement process methodologies. Many of them are in the defense industry, which in the U.S. requires a rating based on 'process models' to obtain contracts. The international standard describing the method of selecting, implementing and monitoring the life cycle for software is ISO/IEC 12207.
A decades-long goal has been to find repeatable, predictable processes that improve productivity and quality. Some try to systematize or formalize the seemingly unruly task of writing software. Others apply project management techniques to writing software. Without project management, software projects can easily be delivered late or over budget. With large numbers of software projects not meeting their expectations in terms of functionality, cost, or delivery schedule, effective project management appears to be lacking.
Organizations may create a Software Engineering Process Group (SEPG), which is the focal point for process improvement. Composed of line practitioners who have varied skills, the group is at the center of the collaborative effort of everyone in the organization who is involved with software engineering process improvement.
Software development activities

The activities of the software development process represented in the waterfall model. There are several other models to represent this process.
An important first task in creating a software product is extracting the requirements, or requirements analysis. Customers typically have an abstract idea of what they want as an end result, but not of what the software should do. Incomplete, ambiguous, or even contradictory requirements are recognized by skilled and experienced software engineers at this point. Frequently demonstrating live code may help reduce the risk that the requirements are incorrect.
Once the general requirements are gleaned from the client, an analysis of the scope of the development should be determined and clearly stated. This is often called a scope document.
Certain functionality may be out of scope of the project as a function of cost or as a result of unclear requirements at the start of development. If the development is done externally, this document can be considered a legal document so that if there are ever disputes, any ambiguity of what was promised to the client can be clarified.
Implementation, testing and documenting
Implementation is the part of the process where software engineers actually program the code for the project.
Software testing is an integral and important part of the software development process. This part of the process ensures that bugs are recognized as early as possible.
Documenting the internal design of software for the purpose of future maintenance and enhancement is done throughout development. This may also include the authoring of an API, be it external or internal.
Deployment and maintenance
Deployment starts after the code is appropriately tested, is approved for release and sold or otherwise distributed into a production environment.
Software training and support are important, because a large percentage of software projects fail when developers do not realize that no matter how much time and planning a development team puts into creating software, it matters little if nobody in the organization ends up using it. People are often resistant to change and avoid venturing into unfamiliar areas, so as part of the deployment phase it is very important to provide training classes for new clients of the software.
Maintaining and enhancing software to cope with newly discovered problems or new requirements can take far more time than the initial development of the software. It may be necessary to add code that does not fit the original design in order to correct an unforeseen problem, or a customer may request more functionality, and code can be added to accommodate their requests. It is during this phase that customer calls come in and you see whether your testing was extensive enough to uncover problems before customers do. If the labor cost of the maintenance phase exceeds 25% of the labor cost of the prior phases, then it is likely that the overall quality of at least one prior phase is poor. In that case, management should consider the option of rebuilding the system (or portions of it) before maintenance costs spiral out of control.
Bug Tracking System tools are often deployed at this stage of the process to allow development teams to interface with customer/field teams testing the software to identify any real or perceived issues. These software tools, both open source and commercially licensed, provide a customizable process to acquire, review, acknowledge, and respond to reported issues.
Software Development Models
Several models exist to streamline the development process. Each one has its pros and cons, and it's up to the development team to adopt the most appropriate one for the project. Sometimes a combination of the models may be more suitable.
Waterfall Model
The waterfall model shows a process where developers are to follow these phases in order:
• Requirements specification (Requirements Analysis)
• Design
• Implementation (or Coding)
• Integration
• Testing (or Validation)
• Deployment (or Installation)
• Maintenance
In a strict Waterfall model, after each phase is finished, it proceeds to the next one. Reviews may occur before transitioning to the next phase which allows for the possibility of changes (which may involve a formal change control process). However, it discourages revisiting and revising any prior phase once it's complete. This "inflexibility" in a pure Waterfall model has been a source of criticism by other more "flexible" models.
Spiral Model
The key characteristic of a Spiral model is risk management at regular stages in the development cycle.
Iterative and Incremental Development
Iterative development[1] prescribes the construction of initially small but ever-larger portions of a software project, to help all those involved uncover important issues early, before problems or faulty assumptions can lead to disaster. Iterative processes are preferred by commercial developers because they offer the potential of reaching the design goals of a customer who does not know how to define what they want.
Agile Development
Agile software development uses iterative development as a basis but advocates a lighter and more people-centric viewpoint than traditional approaches. Agile processes use feedback, rather than planning, as their primary control mechanism. The feedback is driven by regular tests and releases of the evolving software.
There are many variations of agile processes.
XP (Extreme Programming)
In XP, the phases are carried out in extremely small (or "continuous") steps compared to the older, "batch" processes. The (intentionally incomplete) first pass through the steps might take a day or a week, rather than the months or years of each complete step in the Waterfall model. First, one writes automated tests to provide concrete goals for development. Next comes coding (by a pair of programmers), which is complete when all the tests pass and the programmers cannot think of any further tests that are needed. Design and architecture emerge out of refactoring, and come after coding. Design is done by the same people who do the coding. (Only the last feature, merging design and code, is common to all the other agile processes.) The incomplete but functional system is deployed or demonstrated for (some subset of) the users (at least one of whom is on the development team). At this point, the practitioners start again on writing tests for the next most important part of the system.

Formal methods:-
Formal methods are mathematical approaches to solving software (and hardware) problems at the requirements, specification and design levels. Examples of formal methods include the B-Method, Petri nets, Automated theorem proving, RAISE and VDM. Various formal specification notations are available, such as the Z notation. More generally, automata theory can be used to build up and validate application behavior by designing a system of finite state machines.
Finite state machine (FSM) based methodologies allow executable software specification and by-passing of conventional coding (see virtual finite state machine or event driven finite state machine).
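As a toy illustration of an executable FSM specification (the turnstile states and events here are invented for the example), a state machine can be written as a plain transition table and run directly, bypassing conventional hand-written control-flow code:

```python
# Transition table for a turnstile: (state, event) -> next state.
# It stays locked until a coin is inserted, and locks again after a push.
TRANSITIONS = {
    ("locked", "coin"): "unlocked",
    ("locked", "push"): "locked",
    ("unlocked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
}

def run(events, state="locked"):
    """Execute the specification: fold a sequence of events through the table."""
    for event in events:
        state = TRANSITIONS[(state, event)]
    return state

print(run(["coin", "push"]))   # locked
print(run(["coin"]))           # unlocked
```

The table itself is the specification; the tiny interpreter merely executes it, which is the essence of the executable-specification claim above.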
Formal methods are most likely to be applied in avionics software, particularly where the software is safety critical. Software safety assurance standards, such as DO-178B, demand formal methods at the highest level of categorization (Level A).
Formalization of software development is creeping in, in other places, with the application of Object Constraint Language (and specializations such as Java Modeling Language) and especially with Model-driven architecture allowing execution of designs, if not specifications.
Another emerging trend in software development is to write a specification in some form of logic (usually a variation of first-order logic, FOL), and then to directly execute the logic as though it were a program. The OWL language, based on Description Logic, is an example. There is also work on mapping some version of English (or another natural language) automatically to and from logic, and on executing the logic directly. Examples are Attempto Controlled English, and Internet Business Logic, which does not seek to control the vocabulary or syntax. A feature of systems that support bidirectional English-logic mapping and direct execution of the logic is that they can be made to explain their results, in English, at the business or scientific level.
The Government Accountability Office, in a 2003 report on one of the Federal Aviation Administration’s air traffic control modernization programs,[2] recommends following the agency’s guidance for managing major acquisition systems by
• establishing, maintaining, and controlling an accurate, valid, and current performance measurement baseline, which would include negotiating all authorized, unpriced work within 3 months;
• conducting an integrated baseline review of any major contract modifications within 6 months; and
• preparing a rigorous life-cycle cost estimate, including a risk assessment, in accordance with the Acquisition System Toolset’s guidance and identifying the level of uncertainty inherent in the estimate.
B) Building high-quality software
Ans:- Which parameters make software applications high-quality? And which parameters or methods, while desirable, are not directly "quality"? This article, inspired by a post someone else wrote on a mailing list I started for one of my projects, aims to answer this question.
What is a high-quality software application? I'm not talking about the application that's the most hyped, or the fastest, most featureful, or best. You can often hear endless rants and Fear, Uncertainty and Doubt (FUD) attacks about it. Often, it has competing programs that are superior in many respects. But at the end of the day, it is one of the most popular programs out there and is what many people like to use, what they find as the right tool for the job, and often what they are told (or even forced) to use.
How did I think about it? I am active in the fc-solve-discuss mailing list, which is dedicated to computerised techniques for solving the Freecell solitaire card game, and for discussing other automated solving and research of Solitaire games. This mailing list was started after I wrote Freecell Solver (with a lowercase "c" and an uppercase "S"), an open-source solver, which has proven to be quite popular. However, this mailing list was not restricted to discussing it exclusively, and was joined by many other Freecell solving experts, researchers and enthusiasts.
Later on, I lost interest in Freecell Solver, and have only been fixing bugs in the latest stable version (2.8.x). Since it is open source, and with an aim to be a Bazaar-like project I welcome other people to take over and am willing to help them, but it is still very functional as it is.
Recently, this message was written by Gary Campbell, who wrote:
I think the solver discussion in the Wikipedia should mention that the FCPro solvers give quite lengthy and virtually unusable solutions. No human wants to follow the several hundred steps that usually result in order to get at what's really required to solve a given layout. If someone wants a solution that can be understood, they want one that is under 100 steps. This is a major contribution and it should be stated. There are a lot of "toy" solvers and only a very few of "industrial strength." One of the latter is the solver by Danny Jones. I don't see that on your list. The results he has gotten from his solver pretty much put your solver to shame. The more you hype Freecell Solver, the more criticism you open yourself up to.
Unfortunately, it's hard for me to determine what "industrial strength", "enterprise grade" and other such buzzwords are. But I'll try to define high quality here, and try to show when a program is high quality, and when it is not exactly the case.
I initially wanted to give some examples of software that I considered to be of exceptionally high quality, but I decided against it. Whether that is the case for them is a matter of taste, and it might provoke too much criticism against this essay as a whole. Thus, I'll just give some examples, possibly accompanied by screenshots, of places where one program does something better than a different program, while not even implying that the latter is in fact lower-quality in all possible respects.
Parameters of Quality
This section will cover the parameters that make software high quality. However, it will not cover the means to make it so. These include such non-essential things as having good, modular code,[having_good_code] good marketing, or having good automated tests. These things are definitely important, but will be covered only later, and a lot of popular, high-quality software lacks some of them, while its competition may do better in this respect.
The Program is Available for Downloading or Buying
That may seem like a silly thing to say, but you'll be surprised how many times people get it wrong. How many times have you seen web-sites of software that claim that the new version of the software (or even the first) is currently under work, will change the world, but is not available yet? How many times have you heard of web-sites that are not live yet, and refuse to tell people exactly what they are about?
Similarly, the "Stanford checker", which is a sophisticated tool for static code analysis, is not available for download; instead, it is a service provided by its parent company.
Software should be available in the wild somehow (for downloading or at least buying) so people can use it, play with it, become impressed or unimpressed, report bugs, and ask for new features. Otherwise it's just in-house software, or at most a service, which is not adequate for most needs.
In the "Cathedral and the Bazaar", Eric Raymond recommends to "release Early and release Often". Make frequent, incremental releases, so your project won't stagnate. If you take your project and work on it yourself for too long, people will give up.
If you have a new idea for a program, make sure you implement some basic but adequate functionality, and then release it so people can play with it, and learn about it. Most successful projects that were open-source from the start have started this way: the Linux kernel, gcc, vim, perl, CPython. If you look at the earliest versions of them all, you'll find they were very limited if not downright hideous, but now they are often among the best of breed.
The Version Number is Clearly Indicated
Which version of your software are you using? How can you tell? It's not unusual to come to a page where the link to the archive does not contain a version number, nor is the number clearly indicated anywhere. What happens when the number is bumped to indicate a bug fix? How do you tell then?
Good software always indicates its version number in the archive file name and in the name of the top-level directory, provides a --version command-line flag, and mentions the version in the about dialogue if there is one.
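As a minimal sketch of the --version convention described above (the program name and version string here are invented for illustration, not taken from any real project):

```cpp
#include <cstring>
#include <string>

// Illustrative constants; real projects usually derive these from the
// build system or a tag in version control.
static const char *PROGRAM = "mytool";
static const char *VERSION = "1.4.2";

// Returns the banner a conventional --version flag would print,
// or an empty string when the flag is not present.
std::string version_banner(int argc, const char *const argv[]) {
    if (argc > 1 && std::strcmp(argv[1], "--version") == 0)
        return std::string(PROGRAM) + " version " + VERSION;
    return "";
}
```

A real tool would print this banner and exit before doing any other work, so that users and scripts can always query the version cheaply.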
The Source is Available, Preferably under a Usable Licence
Public availability of the source is a great advantage for a program to have. Many people will usually not take a look at it otherwise, for many reasons, some ideological but some also practical. (Assuming there was ever a true ideology that was not also practical.) Without the source, and without it being under a proper licence, the software will not become part of most distributions of Linux, the BSD operating systems, etc.
If you just say that your software "is copyright by John Doe. All Rights Reserved", then it may arguably mean that no one can study its internals (including to fix bugs or add features) without being bound by a non-compete clause, or even that its use or re-distribution is restricted. Some software ships with extremely long, complicated (and often not entirely enforceable) End-User Licence Agreements (EULAs) that no one reads or cares to understand.
As a result, many people will find a program whose licence is not 100% Free and Open Source Software unacceptable. To be truly useful, the software also needs to be GPL-compatible; permissive licences such as the modified BSD licence, the MIT X11 licence, or even pure public-domain source code,[public-domain] are even better for their ability to be sub-licensed and re-used. Licences that allow incorporation but not sub-licensing, like the GNU Lesser General Public License (LGPL), are somewhere in between.
While some software on Linux has become popular despite being under non-optimal licences, many Linux distributions pride themselves that the core system consists of "100% free software". Most software that became non-open-source was eventually forked or fell into disuse, or else suffered a lot of bad publicity.
As a result, high-quality software has a licence that is the most usable in the context of its operation. Such licences are doubly important for freely-distributed UNIX software.
4. Describe the following Object Oriented Methodologies:
A) Patterns
Ans:- In software engineering, a design pattern is a general reusable solution to a commonly occurring problem in software design. A design pattern is not a finished design that can be transformed directly into code. It is a description or template for how to solve a problem that can be used in many different situations. Object-oriented design patterns typically show relationships and interactions between classes or objects, without specifying the final application classes or objects that are involved.
Design patterns reside in the domain of modules and interconnections. At a higher level there are Architectural patterns that are larger in scope, usually describing an overall pattern followed by an entire system.
Not all software patterns are design patterns. For instance, algorithms solve computational problems rather than software design problems.
Design patterns are composed of several sections (see Documentation below). Of particular interest are the Structure, Participants, and Collaboration sections. These sections describe a design motif: a prototypical micro-architecture that developers copy and adapt to their particular designs to solve the recurrent problem described by the design pattern. A micro-architecture is a set of program constituents (e.g., classes, methods...) and their relationships. Developers use the design pattern by introducing in their designs this prototypical micro-architecture, which means that micro-architectures in their designs will have structure and organization similar to the chosen design motif.
In addition to this, patterns allow developers to communicate using well-known, well understood names for software interactions. Common design patterns can be improved over time, making them more robust than ad-hoc designs.
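As a concrete illustration of a design motif that developers copy and adapt, consider the classic Observer pattern; the class names below are invented for this sketch, not taken from any particular library:

```cpp
#include <vector>

// Abstract observer: the "Participants" section of the pattern names
// this role; concrete observers implement update().
class Observer {
public:
    virtual ~Observer() {}
    virtual void update(int newValue) = 0;
};

// Subject: holds state and notifies all attached observers on change.
class Subject {
public:
    void attach(Observer *o) { observers_.push_back(o); }
    void setValue(int v) {
        value_ = v;
        for (Observer *o : observers_) o->update(v); // notify all
    }
private:
    int value_ = 0;
    std::vector<Observer *> observers_;
};

// A concrete observer that simply records the last value it saw.
class Display : public Observer {
public:
    int lastSeen = -1;
    void update(int newValue) override { lastSeen = newValue; }
};
```

The micro-architecture (Subject/Observer classes and their notify relationship) is what gets adapted to a particular design; the names and state are changed, but the structure stays recognizably the same.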
Domain specific patterns
Efforts have also been made to codify design patterns in particular domains, including use of existing design patterns as well as domain specific design patterns. Examples include user interface design patterns,[6] information visualization [7], "secure usability"[8] and web design.[9]
The annual Pattern Languages of Programming Conference proceedings [10] include many examples of domain specific patterns.
The documentation for a design pattern describes the context in which the pattern is used, the forces within the context that the pattern seeks to resolve, and the suggested solution.[17] There is no single, standard format for documenting design patterns. Rather, a variety of different formats have been used by different pattern authors. However, according to Martin Fowler, certain pattern forms have become more well-known than others, and consequently have become common starting points for new pattern-writing efforts.[18] One example of a commonly used documentation format is the one used by Erich Gamma, Richard Helm, Ralph Johnson and John Vlissides (collectively known as the "Gang of Four", or GoF for short) in their book Design Patterns. It contains the following sections:
• Pattern Name and Classification
• Intent
• Also Known As
• Motivation
• Applicability
• Structure
• Participants
• Collaborations
• Consequences
• Implementation
• Sample Code
• Known Uses
• Related Patterns

B) Frameworks
Ans:- A software framework, in computer programming, is an abstraction in which common code providing generic functionality can be selectively overridden or specialized by user code providing specific functionality. Frameworks are a special case of software libraries in that they are reusable abstractions of code wrapped in a well-defined API, yet they contain some key distinguishing features that separate them from normal libraries.
Software frameworks have these distinguishing features that separate them from libraries or normal user applications:
• inversion of control - In a framework, unlike in libraries or normal user applications, the overall program's flow of control is not dictated by the caller, but by the framework.[1]
• default behavior - A framework has a default behavior. This default behavior must actually be some useful behavior and not a series of no-ops.
• extensibility - A framework can be extended by the user, usually by selectively overriding framework behavior or by supplying user code that provides specific functionality.
• non-modifiable framework code - The framework code, in general, is not allowed to be modified. Users can extend the framework, but not modify its code.
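The four features above can be sketched in a toy "framework" base class; all names here are invented for illustration:

```cpp
#include <cctype>
#include <string>

// A toy framework class demonstrating the distinguishing features.
class RequestHandler {
public:
    virtual ~RequestHandler() {}

    // Inversion of control: the framework, not the user, drives the
    // overall flow. Non-virtual, so users cannot modify it
    // (non-modifiable framework code).
    std::string handle(const std::string &request) {
        return wrapInEnvelope(process(request));
    }

protected:
    // Extensibility: users override this hook to supply specific
    // behaviour. Default behaviour: echo the request unchanged.
    virtual std::string process(const std::string &request) {
        return request;
    }

private:
    // Framework-owned plumbing the user never touches.
    std::string wrapInEnvelope(const std::string &body) {
        return "[response] " + body;
    }
};

// User code: specialise the framework by overriding the hook only.
class UpperCaseHandler : public RequestHandler {
protected:
    std::string process(const std::string &request) override {
        std::string out = request;
        for (char &c : out) c = std::toupper(static_cast<unsigned char>(c));
        return out;
    }
};
```

Note how the user never calls wrapInEnvelope or restructures handle; the framework calls the user's code, not the other way around, which is exactly the inversion of control that distinguishes a framework from a library.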
The designers of software frameworks aim to facilitate software development by allowing designers and programmers to devote their time to meeting software requirements rather than dealing with the more standard low-level details of providing a working system, thereby reducing overall development time.[2] For example, a team using a web application framework to develop a banking web site can focus on the operations of account withdrawals rather than the mechanics of request handling and state management.
It can be argued that frameworks add to "code bloat", and that due to competing and complementary frameworks and the complexity of their APIs, the intended reduction in overall development time may not be achieved due to the need to spend additional time learning to use the framework. However, it could be argued that once the framework is learned, future projects might be quicker and easier to complete. The most effective frameworks turn out to be those that evolve from re-factoring the common code of the enterprise, as opposed to using a generic "one-size-fits-all" framework developed by third-parties for general purposes.
Software frameworks typically contain considerable housekeeping and utility code in order to help bootstrap user applications, but generally focus on specific problem domains, such as:
• Artistic drawing, music composition, and mechanical CAD[3][4]
• Compilers for different programming languages and target machines.[5]
• Financial modeling applications[6]
• Earth system modeling applications[7]
• Decision support systems[8]
• Media playback and authoring
• Web applications
• Middleware
5. Describe the goals and scope of UML with suitable examples.
Ans:- The Unified Modeling Language (UML) is a standard language for specifying, visualizing, constructing, and documenting the artifacts of software systems, as well as for business modeling and other non-software systems. The UML represents a collection of best engineering practices that have proven successful in the modeling of large and complex systems.1 The UML is a very important part of developing object oriented software and the software development process. The UML uses mostly graphical notations to express the design of software projects. Using the UML helps project teams communicate, explore potential designs, and validate the architectural design of the software.
Unified Modeling Language (UML) is a standardized general-purpose modeling language in the field of software engineering. The standard is managed, and was created by, the Object Management Group. UML includes a set of graphical notation techniques to create visual models of software-intensive systems.
UML combines best techniques from data modeling (entity relationship diagrams), business modeling (work flows), object modeling, and component modeling. It can be used with all processes, throughout the software development life cycle, and across different implementation technologies.[3] UML has synthesized the notations of the Booch method, the Object-modeling technique (OMT) and Object-oriented software engineering (OOSE) by fusing them into a single, common and widely usable modeling language. UML aims to be a standard modeling language which can model concurrent and distributed systems. UML is a de facto industry standard, and is evolving under the auspices of the Object Management Group (OMG). OMG initially called for information on object-oriented methodologies that might create a rigorous software modeling language. Many industry leaders have responded in earnest to help create the UML standard.[1]
UML models may be automatically transformed to other representations (e.g. Java) by means of QVT-like transformation languages, supported by the OMG. UML is extensible, offering the following mechanisms for customization: profiles and stereotypes. The semantics of extension by profiles have been improved with the UML 2.0 major revision.
Software Development Methods
UML is not a development method by itself,[8] however, it was designed to be compatible with the leading object-oriented software development methods of its time (for example OMT, Booch method, Objectory). Since UML has evolved, some of these methods have been recast to take advantage of the new notations (for example OMT), and new methods have been created based on UML. The best known is IBM Rational Unified Process (RUP). There are many other UML-based methods like Abstraction Method, Dynamic Systems Development Method, and others, designed to provide more specific solutions, or achieve different objectives.
It is very important to distinguish between the UML model and the set of diagrams of a system. A diagram is a partial graphical representation of a system's model. The model also contains a "semantic backplane" — documentation such as written use cases that drive the model elements and diagrams.
UML diagrams represent two different views of a system model
• Static (or structural) view: Emphasizes the static structure of the system using objects, attributes, operations and relationships. The structural view includes class diagrams and composite structure diagrams.
• Dynamic (or behavioral) view: Emphasizes the dynamic behavior of the system by showing collaborations among objects and changes to the internal states of objects. This view includes sequence diagrams, activity diagrams and state machine diagrams.
• UML models can be exchanged among UML tools by using the XMI interchange format
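To make the static view concrete, the sketch below shows how a class-diagram class typically maps to code: attributes become data members, operations become member functions, and an association becomes a pointer or collection. The classes and names are hypothetical:

```cpp
#include <cstddef>
#include <string>
#include <vector>

class Account; // forward declaration for the association

class Customer {
public:
    explicit Customer(std::string name) : name_(std::move(name)) {}

    // Operation from the diagram.
    const std::string &name() const { return name_; }

    // One-to-many association "Customer owns Account", multiplicity *.
    void addAccount(Account *a) { accounts_.push_back(a); }
    std::size_t accountCount() const { return accounts_.size(); }

private:
    std::string name_;                // attribute
    std::vector<Account *> accounts_; // association end
};

class Account {
public:
    explicit Account(double balance) : balance_(balance) {}
    double balance() const { return balance_; }
private:
    double balance_; // attribute
};
```

A class diagram of this model would show the two class boxes, their attribute and operation compartments, and an association line from Customer to Account labelled with multiplicity *.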
Example:- Use Case diagram

6. Explain the following with respect to UML Architecture:
A) Four-Layer Meta model Architecture
Ans:- The meta-metamodeling layer forms the foundation for the metamodeling architecture. The primary responsibility of this layer is to define the language for specifying a metamodel. A meta-metamodel defines a model at a higher level of abstraction than a metamodel, and is typically more compact than the metamodel that it describes. A meta-metamodel can define multiple metamodels, and there can be multiple meta-metamodels associated with each metamodel. (If there is no explicit meta-metamodel, there is an implicit meta-metamodel associated with every metamodel.) While it is generally desirable that related metamodels and meta-metamodels share common design philosophies and constructs, this is not a strict rule. Each layer needs to maintain its own design integrity. Examples of meta-metaobjects in the meta-metamodeling layer are: MetaClass, MetaAttribute, and MetaOperation.

A metamodel is an instance of a meta-metamodel. The primary responsibility of the metamodel layer is to define a language for specifying models. Metamodels are typically more elaborate than the meta-metamodels that describe them, especially when they define dynamic semantics. Examples of metaobjects in the metamodeling layer are: Class, Attribute, Operation, and Component.

A model is an instance of a metamodel. The primary responsibility of the model layer is to define a language that describes an information domain. Examples of objects in the modeling layer are: StockShare, askPrice, sellLimitOrder, and StockQuoteServer.

User objects (a.k.a. user data) are an instance of a model. The primary responsibility of the user objects layer is to describe a specific information domain. Examples of objects in the user objects layer are concrete values and instances such as 654.56 and sell_limit_order.
Four Layer Metamodeling Architecture

Layer | Description | Example
meta-metamodel | The infrastructure for a metamodeling architecture. Defines the language for specifying metamodels. | MetaClass, MetaAttribute, MetaOperation
metamodel | An instance of a meta-metamodel. Defines the language for specifying a model. | Class, Attribute, Operation, Component
model | An instance of a metamodel. Defines a language to describe an information domain. | StockShare, askPrice, sellLimitOrder, StockQuoteServer
user objects (user data) | An instance of a model. Defines a specific information domain. | 654.56, sell_limit_order

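The model-layer versus user-object-layer distinction can be illustrated in code. The sketch below is hypothetical: a StockShare class with an askPrice attribute and a sellLimitOrder operation is a model-layer element (itself an instance of the metamodel element "Class"), while any particular object created from it belongs to the user objects layer:

```cpp
#include <string>

// Model-layer element: defines the language of the stock-trading domain.
class StockShare {
public:
    StockShare(std::string symbol, double askPrice)
        : symbol_(std::move(symbol)), askPrice_(askPrice) {}

    double askPrice() const { return askPrice_; }

    // Operation from the model layer: would this share sell at the
    // given limit price?
    bool sellLimitOrder(double limit) const { return askPrice_ >= limit; }

private:
    std::string symbol_; // attribute
    double askPrice_;    // attribute
};
```

A user object is then simply an instance such as StockShare("ACME", 654.56): the class defines the language, the instance populates the specific information domain.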
B)Package Structure
Ans:- Tasks in IRAF (and external packages such as STSDAS) are organized by package in the cl. The structure of the directories containing the source and run-time files reflects the package structure apparent from the cl. For example, in the case of STSDAS, each package resides in a directory under the stsdas root directory, just as the STSDAS packages are organized under the stsdas package in the cl. There are several files common to the package as a whole and several similar files required for each task in the package. These files need to be modified when installing a new task. The required common files in the package directory are:
• package.cl - Package cl procedure, cl task definitions
• x_package.x - SPP task definitions
• mkpkg - How to build the package
In the above file names, the name of the package is used in place of package. For example, the playpen package in STSDAS is in the directory stsdas$pkg/playpen and the procedure script is called playpen.cl. In addition, documentation files exist in the package level directory as well as a doc directory containing individual help files for the tasks in the package.
• package.hd - Help database pointers
• package.hlp - Package level help
• package.men - Package menu, one line task descriptions
• doc - Directory containing task help files
Tasks in the Package
Each task has additional files, the type of which depends on the nature of the task. These files would be added when you install a new task. Each task must also have entries in the package files.
• A cl procedure task requires only a task.cl file in the package directory, containing the cl statements and parameter definitions (for example, disconlab.cl in the playpen package). It also requires a task.hlp file in the doc subdirectory.
• An SPP (physical) task requires at least one SPP source file, by convention called t_task.x (with task replaced by the task name), for example playpen$t_wcslab.x. Additional source files may reside in the package directory or in subdirectories.
• The task may use an include (header) file with the name task.h, e.g. playpen$wcslab.h.
• Each task requires a parameter file, task.par, containing definitions of the task parameters, such as playpen$wcslab.par (unless it is a script, defined by a .cl file).
• The doc directory contains the task help files, one for each task in the package.

The procedure, then, is to develop the application in a private directory with a structure similar to the intended target package. Development should be done in a local user directory rather than in the system directories (not even those of the development system). Use an existing package as an example of how to proceed. When you are ready to install the package, copy the task files to the intended package directory and edit the existing package files to include references to the new task. Run mkpkg to rebuild the package with the changes (the added task). When you are satisfied that things work, run mkpkg install to move the executable to the appropriate binaries directory.
C) Levels of Formalism
Ans:- There are multiple levels of scientific formalism possible. At the lowest level, scientific formalism deals with the symbolic manner in which the information is presented. To achieve formalism in a scientific theory at this level, one starts with a well defined set of axioms, and from these follows a formal system.
However, at a higher level, scientific formalism also involves consideration of the axioms themselves. These can be viewed as questions of ontology. For example, one can, at the lower level of formalism, define a property called 'existence'. However, at the higher level, the question of whether an electron exists in the same sense that a bacterium exists still needs to be resolved. Some actual formal theories on facts have been proposed.
The scientific climate of the twentieth century revived these questions. From about the time of Isaac Newton to that of James Clerk Maxwell they had been dormant, in the sense that the physical sciences could rely on the status of the real numbers as a description of the continuum, and an agnostic view of atoms and their structure. Quantum mechanics, the dominant physical theory after about 1925, was formulated in a way which raised questions of both types.
In the Newtonian framework there was indeed a degree of comfort in the answers one could give. Consider for example the question of whether the Earth really goes round the Sun. In a frame of reference adapted to calculating the Earth's orbit, this is a mathematical but also tautological statement. Newtonian mechanics can answer the question of whether it is not equally the case that the Sun goes round the Earth, as it indeed appears to Earth-based astronomers. In Newton's theory there is a basic, fixed frame of reference that is inertial. The 'correct answer' is that the point of view of an observer in an inertial frame of reference is privileged: other observers see artifacts of their acceleration relative to an inertial frame. Before Newton, Galileo had drawn the consequences from the Copernican heliocentric model. He was, however, constrained to call his work (in effect) scientific formalism, under the old description of 'saving the phenomena'. To avoid going against authority, the elliptic orbits of the heliocentric model could be labelled as a more convenient device for calculations, rather than an actual description of reality.
In general relativity, Newton's inertial frames are no longer privileged. In quantum mechanics, Paul Dirac argued that physical models were not there to provide semantic constructs allowing us to understand microscopic physics in language comparable to that we use on the familiar scale of everyday objects. His attitude, adopted by many theoretical physicists, is that a good model is judged by our capacity to use it to calculate physical quantities that can be tested experimentally. Dirac's view is close to what Bas van Fraassen calls constructive empiricism.

D) Naming Conventions and Typography
Ans:- Naming convention is a convention for naming things. The intent is to allow useful information to be deduced from the names based on regularities. For instance, in Manhattan, streets are numbered, with East-West streets being called "Streets" and North-South streets called "Avenues".
Well-chosen naming conventions aid the casual user in navigating larger structures. Several areas where naming conventions are commonly used include:
• In computer programming, identifier naming conventions
• In the sciences, systematic names for a variety of things
• In astronomy, planetary nomenclature
• In classics, Roman naming conventions
• In industry, product naming conventions
A naming convention may be followed when:
• Large corporate, university, or government campuses may name rooms within the buildings to help orient tenants and visitors.
• Children's names may be alphabetical by birth order. In some Asian cultures, it is common for immediate siblings to share a middle name. In many cultures it is common for the son to be named after the father.[1] In other cultures, the name may include the place of residence.[2] Roman naming convention denotes social rank.
• Products. Automobiles typically have a binomial name, a "make" (manufacturer) and a "model", in addition to a model year. Computers often have increasing numbers in their names to signify the successive generations.
• School courses: an abbreviation for the subject area and then a number ordered by increasing level of difficulty.
• Virtually all organizations that assign names or numbers follow some convention in generating these identifiers (e.g. phone numbers, bank accounts, government IDs, credit cards, etc).

Typographic convention | Description
courier font | Indicates a literal value, such as a command name, filename, information that you type, or information that the system prints on the screen.
bold | Indicates a new term the first time that it appears.
italic | Indicates a variable name or a cross-reference.
blue outline | A blue outline, which is visible only when you view the manual online, indicates a cross-reference hyperlink. Click inside the outline to jump to the object of the reference.
{ } | In a syntax line, curly braces surround a set of options from which you must choose one and only one.
[ ] | In a syntax line, square brackets surround an optional parameter.
... | In a syntax line, ellipses indicate a repetition of the previous parameter. For example, option[,...] means that you can enter multiple, comma-separated options.
< > | In a naming convention, angle brackets surround individual elements of a name to distinguish them from each other, as in tmp.log.
/, \ | In this document, backslashes (\) are used as the convention for directory paths. For UNIX installations, substitute slashes (/) for backslashes. All product pathnames are relative to the directory where the product is installed on your system.
%text% and $text | Text within percent (%) signs indicates the value of the Windows text system variable or user variable. The equivalent notation in a UNIX environment is $text, indicating the value of the text UNIX environment variable.
ProductDir | Represents the directory where the product is installed.
Typography is the art and technique of arranging type, type design, and modifying type glyphs. Type glyphs are created and modified using a variety of illustration techniques. The arrangement of type involves the selection of typefaces, point size, line length, leading (line spacing), adjusting the spaces between groups of letters (tracking) and adjusting the space between pairs of letters (kerning). Typography is performed by typesetters, compositors, typographers, graphic designers, art directors, comic book artists, graffiti artists, and clerical workers. Until the Digital Age, typography was a specialized occupation. Digitization opened up typography to new generations of visual designers and lay users.

-02 –
Object Oriented Analysis & Design using UML

1. Illustrate various Diagram Elements in the context of UML Notation guide.
Ans:- Unified Modeling Language (UML) is a standardized general-purpose modeling language in the field of software engineering. The standard is managed, and was created by, the Object Management Group. UML includes a set of graphical notation techniques to create visual models of software-intensive systems.
Development toward UML 2.0
UML has matured significantly since UML 1.1. Several minor revisions (UML 1.3, 1.4, and 1.5) fixed shortcomings and bugs with the first version of UML, followed by the UML 2.0 major revision that was adopted by the OMG in 2005[6].
There are four parts to the UML 2.x specification:
the Superstructure that defines the notation and semantics for diagrams and their model elements;
the Infrastructure that defines the core metamodel on which the Superstructure is based;
the Object Constraint Language (OCL) for defining rules for model elements;
and the UML Diagram Interchange that defines how UML 2 diagram layouts are exchanged.
The current versions of these standards follow: UML Superstructure version 2.2, UML Infrastructure version 2.2, OCL version 2.0, and UML Diagram Interchange version 1.0[7].
Although many UML tools support some of the new features of UML 2.x, the OMG provides no test suite to objectively test compliance with its specifications.
Diagrams overview
UML 2.2 has 14 types of diagrams divided into two categories.[10] Seven diagram types represent structural information, and the other seven represent general types of behavior, including four that represent different aspects of interactions. These diagrams can be categorized hierarchically as shown in the following class diagram:

UML does not restrict UML element types to a certain diagram type. In general, every UML element may appear on almost all types of diagrams; this flexibility has been partially restricted in UML 2.0. UML profiles may define additional diagram types or extend existing diagrams with additional notations.
In keeping with the tradition of engineering drawings, a comment or note explaining usage, constraint, or intent is allowed in a UML diagram.
2. Describe the theory along with real time examples of the following concepts:
A) Nested Class Declarations, Type and Implementation Class
Ans:- A class can be declared within the scope of another class. Such a class is called a "nested class." Nested classes are considered to be within the scope of the enclosing class and are available for use within that scope. To refer to a nested class from a scope other than its immediate enclosing scope, you must use a fully qualified name.
The following example shows how to declare nested classes:

// nested_class_declarations.cpp
class BufferedIO
{
public:
   enum IOError { None, Access, General };

   // Declare nested class BufferedInput.
   class BufferedInput
   {
   public:
      int read();
      int good()
      {
         return _inputerror == None;
      }
   private:
      IOError _inputerror;
   };

   // Declare nested class BufferedOutput.
   class BufferedOutput
   {
      // Member list.
   };
};

int main()
{
}
BufferedIO::BufferedInput and BufferedIO::BufferedOutput are declared within BufferedIO. These class names are not visible outside the scope of class BufferedIO. However, an object of type BufferedIO does not contain any objects of types BufferedInput or BufferedOutput.
Nested classes can directly use names, type names, names of static members, and enumerators only from the enclosing class. To use names of other class members, you must use pointers, references, or object names.
In the preceding BufferedIO example, the enumeration IOError can be accessed directly by member functions in the nested classes, BufferedIO::BufferedInput or BufferedIO::BufferedOutput, as shown in function good.
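The access rule above can be seen in a short sketch (the names Logger and Filter are hypothetical, chosen only for illustration): a nested class may use the enclosing class's type names, static members, and enumerators directly, but needs an explicit object, pointer, or reference to reach non-static members.

```cpp
#include <cassert>

class Logger {
public:
    static int defaultLevel;   // static member: directly visible to nested classes
    int instanceLevel = 1;     // non-static member: requires an object or pointer

    class Filter {
    public:
        // OK: static members of the enclosing class are usable without qualification.
        bool passesDefault(int level) const { return level >= defaultLevel; }

        // Non-static members must be reached through an explicit object.
        bool passesInstance(const Logger& owner, int level) const {
            return level >= owner.instanceLevel;
        }
    };
};

int Logger::defaultLevel = 2;
```

As in the BufferedIO example, a Logger object does not contain a Filter object; Filter is merely declared within Logger's scope and is referred to from outside as Logger::Filter.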
Type and Implementation Class
A type is a class that may have attributes, associations, and operations, but does not have any methods. A type defines a role an object may play relative to other objects, similar to how a rolename indicates the role a class plays relative to other classes in an association. For example, a Worker object may play the role of a project manager, resource manager, human resource, or system administrator. A type is shown as a class marked with the type keyword.

Types are commonly used during analysis activities within a development process to identify the kinds of objects a system may require. You can think of types as conceptual classes, because they are ideas for possible classes. Also, because types do not have methods and represent roles only, they do not have instances.
Types may be used with binary and n-ary association and link ends. A comma-separated list of one or more type names may be placed following a rolename to indicate the roles a class or object plays in the relationship. Separate the rolename from the list of types using a colon. If no rolename is used, the type names are placed following a colon.
Figure 3-26 uses the types from Figure 3-25 to update Figure 3-13. It shows the various roles a worker may play relative to work products and units of work. A worker may be a project manager, resource manager, human resource, and system administrator.
Figure 3-26. Types with association ends

B) Interfaces and Parameterized Class (Template)
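Ans:- An interface is a classifier that declares a set of operations without providing implementations; classes that realize the interface supply the methods. A parameterized class (template) is a class declared with unbound type parameters that are bound when the template is instantiated. A rough C++ sketch of both (all names here, Shape, Rectangle, Stack, are hypothetical, chosen only for illustration):

```cpp
#include <cassert>
#include <vector>

// A UML interface maps to a C++ abstract class with only pure
// virtual operations and no state.
class Shape {
public:
    virtual ~Shape() = default;
    virtual double area() const = 0;
};

// A concrete class realizes the interface by implementing its operations.
class Rectangle : public Shape {
public:
    Rectangle(double w, double h) : w_(w), h_(h) {}
    double area() const override { return w_ * h_; }
private:
    double w_, h_;
};

// A parameterized class: the type parameter T is bound at
// instantiation, as with a UML template class and its «bind» dependency.
template <typename T>
class Stack {
public:
    void push(const T& v) { items_.push_back(v); }
    T pop() { T v = items_.back(); items_.pop_back(); return v; }
    bool empty() const { return items_.empty(); }
private:
    std::vector<T> items_;
};
```

In UML terms, Rectangle realizes the Shape interface (a realization relationship), while Stack<int> is a particular binding of the Stack template.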

3. Describe the following UML diagrams with real time examples:
A) Sequence Diagrams
Ans:- A sequence diagram in Unified Modeling Language (UML) is a kind of interaction diagram that shows how processes operate with one another and in what order. It is a construct of a Message Sequence Chart.
Sequence diagrams are sometimes called Event-trace diagrams, event scenarios, and timing diagrams.
A sequence diagram shows, as parallel vertical lines (lifelines), different processes or objects that live simultaneously, and, as horizontal arrows, the messages exchanged between them, in the order in which they occur. This allows the specification of simple runtime scenarios in a graphical manner.
For instance, the UML 1.x diagram on the right describes the sequences of messages of a (simple) restaurant system. This diagram represents a Patron ordering food and wine, drinking wine then eating the food, and finally paying for the food. The dotted lines extending downwards indicate the timeline. Time flows from top to bottom. The arrows represent messages (stimuli) from an actor or object to other objects. For example, the Patron sends message 'pay' to the Cashier. Half arrows indicate asynchronous method calls.
The UML 2.0 Sequence Diagram supports similar notation to the UML 1.x Sequence Diagram with added support for modeling variations to the standard flow of events.
Diagram building blocks
Participants in the interaction are shown as lifelines. If the lifeline is that of an object, it demonstrates a role. Note that leaving the instance name blank can represent anonymous and unnamed instances.
In order to display interaction, messages are used. These are horizontal arrows with the message name written above them. Solid arrows with full heads are synchronous calls, solid arrows with stick heads are asynchronous calls and dashed arrows with stick heads are return messages. This definition is true as of UML 2, considerably different from UML 1.x.
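The arrow kinds map naturally onto code: a synchronous message blocks the caller until the reply comes back (the dashed return arrow), while an asynchronous message lets the caller continue. A minimal C++ sketch (priceOf and priceOfAsync are hypothetical names used only for illustration):

```cpp
#include <future>

// Synchronous message: the caller blocks until the callee returns
// (solid arrow with a full head; the reply is the dashed return arrow).
int priceOf(int quantity) { return quantity * 10; }

// Asynchronous message: the caller continues without waiting for the
// result (solid arrow with a stick head in UML 2).
std::future<int> priceOfAsync(int quantity) {
    return std::async(std::launch::async, priceOf, quantity);
}
```

A caller would invoke priceOf and wait in place, or invoke priceOfAsync, keep running, and collect the reply later via the future's get().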
Activation boxes, or method-call boxes, are opaque rectangles drawn on top of lifelines to represent that processes are being performed in response to the message (ExecutionSpecifications in UML).
Objects calling methods on themselves use messages and add new activation boxes on top of any others to indicate a further level of processing.
When an object is destroyed (removed from memory), an X is drawn on top of the lifeline, and the dashed line ceases to be drawn below it (this is not the case in the first example though). It should be the result of a message, either from the object itself, or another.
A message sent from outside the diagram can be represented by a message originating from a filled-in circle (found message in UML) or from a border of sequence diagram (gate in UML).
UML 2 has introduced significant improvements to the capabilities of sequence diagrams. Most of these improvements are based on the idea of interaction fragments[2] which represent smaller pieces of an enclosing interaction. Multiple interaction fragments are combined to create a variety of combined fragments[3], which are then used to model interactions that include parallelism, conditional branches, optional interactions etc.

B) Collaboration Diagrams

A collaboration diagram, also called a communication diagram or interaction diagram, is an illustration of the relationships and interactions among software objects in the Unified Modeling Language (UML). The concept is more than a decade old although it has been refined as modeling paradigms have evolved.
A collaboration diagram resembles a flowchart that portrays the roles, functionality and behavior of individual objects as well as the overall operation of the system in real time. Objects are shown as rectangles with naming labels inside. These labels are preceded by colons and may be underlined. The relationships between the objects are shown as lines connecting the rectangles. The messages between objects are shown as arrows connecting the relevant rectangles along with labels that define the message sequencing.
Collaboration diagrams are best suited to the portrayal of simple interactions among relatively small numbers of objects. As the number of objects and messages grows, a collaboration diagram can become difficult to read. Several vendors offer software for creating and editing collaboration diagrams.
4. Explain the theory of UML Profile for Business Modeling.
Ans:- The Rational UML profile for business modeling is a component of the Rational Unified Process (RUP). It presents a UML language for capturing business models and is supported by the Business Modeling Discipline in the RUP.
The profile is intended to enable UML tools to be used in the area of business engineering. This involves diverse disciplines such as business information modeling, business organization modeling, and business process modeling, as well as high-level concept and goal modeling that act as the requirements for the activities of the business. It will form both a foundation for a new class of UML tools and an interchange semantics between existing UML tools and other business engineering tools.
The RUP Business Modeling profile has recently been extended and updated to allow for the capture of more information regarding business context and business processes. Early versions of the RUP business modeling discipline were intended for a very basic capture of business information, just enough to understand the requirements for the development of an application supporting the business. The goal of this update is to broaden the concepts and capabilities of the profile to capture more information, and with more fidelity, in the model.
The Business Modeling profile is based on prior work by Rational Software and Objectory, and is also used as an example profile documented in the OMG UML 1.2, 1.3, and 1.4 language specifications.
Overview of the UML profile for business modeling
This section presents the following topics:
• Conceptual Model
• Structure of the Profile
• Identified Subset of the UML
Conceptual Model
The following UML diagram acts as a guide to the profile and demonstrates the important concepts of the profile and the relationships between these concepts. Note that the Conceptual Model follows the same basic structure as the profile itself: the use case, domain, and resource models.

Figure 1. The conceptual model relationships

Structure of the profile
Internally, within the definition of the profile, we separate the elements into a number of packages, as shown below. This organization is not reflected in the end-user visible profile but does provide some guidance in how one might structure a model to make best use of the elements we provide.
Package structure of the profile

The set of packages is organized around the three models that make up the artifacts for the RUP Business Modeling Workflow. Note, however, that in the UML, a profile is a flat namespace when consumed by the user; the packages are therefore for the organization of the profile during its development and have no impact or meaning to the end user of a tool that implements it.
5. Explain the following with respect to Object Constraint Language:
A) Basic Values and Types
Ans:- Introduction
The Object Constraint Language (OCL) is a formal language used to express constraints in an unambiguous way. Expressions written in OCL can return single values and collections of objects, but they do not alter the state of the objects.
With Delphi's ECO II framework, you can use OCL throughout the process of designing your application. OCL is available to:
• Specify the values of derived attributes of classes.
• Specify the values of derived association ends.
• Specify constraints for your objects.
• Configure ECO handles, specifying the values or objects to which the handles refer.
• Evaluate dynamically built OCL queries and expressions at runtime.
• Translate OCL queries to SQL to be efficiently executed in the database.
This article presents a brief introduction to OCL. It shows some of the common operations you can perform, but is not meant to be an exhaustive reference. The official OCL specification in PDF form can be found at the Object Management Group (OMG) website (http://www.omg.org/cgi-bin/doc?formal/03-03-13).
Establishing the Context of an OCL Expression
OCL expressions are evaluated in the context of a data type. Typically this type is a class declared in the model. Within the expression, the keyword self refers to the instance of that data type. For example, if the context is a class called Person, then self refers to the Person instance being evaluated.
The OCL Specification includes four predefined, basic data types. These are:
• Integer
• Real
• Boolean
• String
In addition to these intrinsic types, OCL expressions can refer to any data type defined in the model. These additional types can be either values or objects.
The OCL specification includes basic collection types:
• Set: Unordered, no duplicates
• Sequence: Ordered, duplicates allowed
• Bag: Unordered, duplicates allowed
Basic Anatomy of an OCL Expression
An OCL expression consists of a context, and navigation from that context to the target value you are interested in. For example, consider the following class diagram:

If the context is Teacher, the following are valid OCL expressions:
self.firstName : results in a value of type String
self.lastName : results in a value of type String
self.classRoomAssignment : results in a value of type String
self.Courses : results in a collection of Course objects (because the multiplicity of the association is 1..*)
self.hireDate : results in a DateTime object
self.Courses.numberOfStudents : results in a collection of Integer values (one for each course)
self.Courses.numberOfStudents->sum : results in a value of type Integer (the total number of students in the courses of the teacher)
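The navigations above can be mirrored in ordinary code. A rough C++ sketch of the Teacher/Course fragment (the attribute names follow the expressions above, but the class definitions themselves are reconstructed for illustration), where totalStudents corresponds to self.Courses.numberOfStudents->sum:

```cpp
#include <numeric>
#include <string>
#include <vector>

struct Course {
    std::string name;
    int numberOfStudents;
};

struct Teacher {
    std::string firstName;
    std::vector<Course> Courses;   // multiplicity 1..* maps to a collection

    // Equivalent of the OCL navigation self.Courses.numberOfStudents->sum:
    // navigate to the courses, project the attribute, and sum the results.
    int totalStudents() const {
        return std::accumulate(Courses.begin(), Courses.end(), 0,
            [](int acc, const Course& c) { return acc + c.numberOfStudents; });
    }
};
```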
If the context is Course, the expression
results in a single instance of a Teacher (because the multiplicity of that association is 1). If the multiplicity had been 0..1, the result would have been either an instance of Teacher, or a null value.
ECO deals with navigation on NULL values differently than you might expect. In the context of a Course, the expression
is valid and returns the name of the instructor if there is one and an empty string if the Course has no instructor.
Expressions are evaluated from left to right to get the type and value of the result.
B) Objects and Properties
Ans:- The Object Constraint Language (OCL), which forms part of the UML 1.1 set of modelling notations, is a precise, textual language for expressing constraints that cannot be shown in the standard diagrammatic notation used in UML. A semantics for OCL lays the foundation for building CASE tools that support integrity checking of whole UML models, not just the components expressed using OCL. This paper provides a semantics for OCL, at the same time providing a semantics for classes, associations, attributes, and states.
Operations on Types
In OCL, types themselves have predefined operations that can be performed on them. Operations are defined on both value types, and on collections. If the operation is specified on a basic type, use the dot to continue the expression. If the operation is specified on a collection, use the -> operator to continue the expression.
For example, length is a predefined OCL operation defined for the string type. The expression self.firstName.length returns the number of characters in the firstName attribute.
However, in the expression self.Courses->size(), self.Courses results in a collection of Course objects, so the arrow operator is used instead of the dot. The expression returns the number of elements in the collection.
Some operations take parameters. When this is the case, the parameter list is enclosed in parentheses. For operations that take no parameters, such as size, the OCL specification calls for parentheses with an empty argument list. However, in ECO, you can omit the empty parentheses.
Operations on Basic Types
The following table shows some of the operations defined by the OCL specification for the basic types.
Integer • =, +, -, /, *
• abs()
• div(i: Integer)
• mod(i: Integer)
• max(i: Integer)
• min(i: Integer)
Real • =, <>, <, >, <=, >=
• +, -, /, *
• abs()
• floor()
• round()
• max(r: Real)
• min(r: Real)
Boolean • =
• or, xor, and, not
String • =
• length() (Note: the OCL specification defines the size() operation on strings; ECO uses the name length().)
• toUpper()
• toLower()
• subString(low: Integer, high: Integer)
• concat(s: String) (Note: in ECO, the operator + is also defined as a string concatenator.)
For all types, the operators +, -, *, /, <, >, <>, <=, >=, div, mod, and, or, and xor may be written using infix notation, as in:
a + b
Perhaps the most common operation performed on a type is the allInstances operation, which retrieves a collection of all instances of the given type. The expression Teacher.allInstances results in a collection of all Teacher objects in the ECO space.
Operations on Meta Types
The following table shows other operations that are defined for all types.
typeName Returns the name of the type.
attributes Returns the set of attributes of the type.
associationEnds Returns the set of navigable association ends.
supertypes Returns the set of all direct supertypes.
allSuperTypes Returns the entire set of supertypes.
allSubClasses Returns the set of all subclasses defined on the type.

Other type-related operations
oclIsKindOf(aType) Returns true if the value is of the specified type or one of its subtypes.
oclIsTypeOf(aType) Returns true if the value is of the specified type exactly.
oclAsType(aType) Returns the same value, but typed as the specified type. A runtime exception is thrown if the typecast fails.

Operations on Collections
The following table shows some of the common operations performed on collections. It is not an exhaustive list.
size() Returns the number of elements in the collection.
includes(object) Returns true if the collection contains the given object.
excludes(object) Returns true if the collection does not include the given object.
count(object) Returns the number of times object occurs in the collection.
isEmpty() Returns true if the collection is empty.
notEmpty() Returns true if the collection is not empty.

There is a special construct in OCL called iterators. Iterators are defined for all collections, and behave differently from normal operations. For example:
Teacher.allInstances()->select(courses->size() > 2)
The above expression iterates over all Teacher objects and returns a new collection containing the teachers that fulfill the condition in the select statement. Normally, in an expression, it is possible to omit the self keyword, since it is implicit. Inside an iteration, however, the implicit variable is the loop variable: courses in the example is applied to an implicit variable of type Teacher, regardless of the context of the expression. The following table shows the different iterators in OCL.
select(booleanexpr) Returns the collection of elements that yields true.
reject(booleanexpr) Returns the collection of elements that yields false.
orderBy(anyexpr) Returns the same set of elements but ordered according to the anyexpr.
orderDescending(anyexpr) Reverse order compared to previous.
forAll(booleanexpr) Returns true if all of the objects in the collection yield true.
exists(booleanexpr) Returns true if at least one of the objects in the collection yields true.
collect(anyexpr) Returns a collection of the values returned by the iteration expression.
iterate Generic iteration defined in the OCL specification, but not implemented in ECO.
Iterators can be nested:
Teacher.allInstances()->select(courses->exists(numberOfStudents > 2))
It is also possible to make the implicit iterator variable explicit:
Teacher.allInstances()->select(t | t.courses->size() > 2)
Here the variable t is introduced and used to reference the loop variable.
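The select and exists iterators behave like standard filtering and searching over a collection. A C++ sketch of Teacher.allInstances()->select(courses->size() > 2) and its exists counterpart, using a simplified hypothetical Teacher that carries only a course count:

```cpp
#include <algorithm>
#include <iterator>
#include <vector>

struct Teacher { int courseCount; };

// select(booleanexpr): keep the elements for which the predicate yields true.
std::vector<Teacher> selectBusy(const std::vector<Teacher>& all) {
    std::vector<Teacher> out;
    std::copy_if(all.begin(), all.end(), std::back_inserter(out),
                 [](const Teacher& t) { return t.courseCount > 2; });
    return out;
}

// exists(booleanexpr): true if at least one element satisfies the predicate.
bool existsBusy(const std::vector<Teacher>& all) {
    return std::any_of(all.begin(), all.end(),
                       [](const Teacher& t) { return t.courseCount > 2; });
}
```

The explicit loop variable t in the OCL example corresponds to the lambda parameter here; reject would simply negate the predicate.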
6. Explain the theory of Collection Operations with respect to Object Constraint Language Specification.
Ans:- The OCL is used to represent constraints in the UML class diagram. Therefore, we parse OCL constraints and translate them into AsmL in order to enforce these constraints on the instance-level diagrams. The OCL parser module, together with the OCL library written in AsmL, supports the checking of OCL constraints within a UML model that is translated into AsmL.
OCL Library in AsmL

We use a class diagram to represent the structure of OCL types defined in the UML specification. The figure above shows the metamodel for OCL that is used for the OCL library. The original design of the metamodel comes from the Dresden OCL Toolkit and has been modified to suit the needs and the limitations imposed by AsmL as a target language.
All basic OCL types (Boolean, String, Integer, and Real) inherit from the OclAny class, which is the supertype of all the basic and user-defined types. The OclType class has operations to retrieve type information from an OCL object and is implemented as a subtype of OclAny. The three OCL collection types, namely Set, Bag, and Sequence, inherit from an OclCollection class, which contains operations common to all the collection types. OclAny and OclCollection implement an OclRoot interface, which serves two functions. First, it connects all the OCL classes in the hierarchy together so that object-oriented features like polymorphism can be used. Second, it provides a general set of operations common to all the OCL types. OclEnum represents an enumeration in OCL, and we use constant values to define enumeration values. OclAnyImpl serves as the base class for all the classes defined by the user in the specification class diagram. This links the classes defined in the UML model with the rest of the OCL types so that we can perform OCL operations on them during the course of checking OCL constraints.
Basic OCL Type Operations
Operations for basic OCL types are encapsulations of the corresponding primitive operations of the basic types in AsmL. For example, the following code fragment shows the implementation of some operations of the OCL Real datatype:
type Real = Double
public class OclReal extends OclAny
public var val as Real
public addition (b as OclReal) as OclReal
return new OclReal (val + b.val)
public getVal() as Real = val
public setVal (b as Real)
val := b
The Real type in OCL is represented using the Double datatype in AsmL. As you can see, the addition operation of OclReal encapsulates the primitive addition operation of the basic AsmL datatype. All the basic type operations are implemented similarly.
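The same encapsulation pattern can be sketched in C++: wrap the primitive representation and forward each OCL operation to the corresponding primitive operation. This is a rough analogue for illustration, not the actual ECO/AsmL implementation:

```cpp
// C++ analogue of the AsmL OclReal wrapper: the OCL Real type is
// represented by a primitive (double), and each OCL operation
// encapsulates the corresponding primitive operation.
class OclReal {
public:
    explicit OclReal(double v) : val_(v) {}

    // OCL addition encapsulates primitive + on the representation.
    OclReal addition(const OclReal& b) const { return OclReal(val_ + b.val_); }

    double getVal() const { return val_; }
    void setVal(double v) { val_ = v; }

private:
    double val_;
};
```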
Collection Type Operations
Collection types are encapsulated in a way similar to the basic data types because AsmL has support for collection types. Below is a fragment of the implementation of the OclCollection class, which is the base class of the three collection classes.
public abstract class OclCollection of T implements OclRoot
public var collSet as Set of T
public var collSequence as Seq of T
public var usingSet as Boolean
Classes for OCL collection types are implemented using parameterized (generic) classes, similar to templates in C++. The OclCollection class has several data members, two of which (collSet and collSequence) are mutually exclusive: which one is used depends on whether the actual concrete derived class is a Set, or a Sequence or a Bag. We use an AsmL Seq to represent OCL sequences and bags because support for the Bag type in recent versions of AsmL caused problems that we have not yet been able to fix. The usingSet member indicates to the common collection operations which of the two collection members to use, depending on the actual concrete collection type.
Most of the OCL collection operations are implemented by translating their definitions given in the UML specification into AsmL. Wherever possible, they encapsulate the collection operations already available in AsmL. For example,
public size() as OclInteger
if isSet() then
return ToOclInteger (collSet.Size)
return ToOclInteger (collSequence.Size)
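The shape of OclCollection, two mutually exclusive backing members plus a flag, and the size() dispatch above can be sketched in C++ as well (a simplified analogue, with std::set and std::vector standing in for the AsmL Set and Seq):

```cpp
#include <cstddef>
#include <set>
#include <utility>
#include <vector>

// Mirrors the AsmL OclCollection sketch: one member backs sets, the
// other backs sequences/bags, and a flag records which one is live.
template <typename T>
class OclCollection {
public:
    explicit OclCollection(std::set<T> s)
        : collSet(std::move(s)), usingSet(true) {}
    explicit OclCollection(std::vector<T> q)
        : collSequence(std::move(q)), usingSet(false) {}

    // size() dispatches on the concrete collection kind, as in the
    // AsmL fragment above.
    std::size_t size() const {
        return usingSet ? collSet.size() : collSequence.size();
    }

private:
    std::set<T> collSet;
    std::vector<T> collSequence;
    bool usingSet;
};
```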

Iterate-based Collection Operations
There are a number of collection operations that take OCL expressions as parameters and work on all the elements of a collection. We call these operations iterate-based collection operations: OCL operations that can be described in terms of an iterate operation. Operations such as select, reject, collect, forAll, and exists fall into this category. In the UML specification, iterate is defined as follows:
collection->iterate( elem : Type; acc : Type = <init-expression> |
    expression-with-elem-and-acc )
It is a generic collection operation that evaluates the given OCL expression for each element in the collection and the result of each evaluation is accumulated until the final result is returned to the caller. In terms of an AsmL-like pseudocode, an iterate expression looks like the following:
initialize accumulator
step foreach elem in collection
evaluate OCL expression on elem
update accumulator with result of previous evaluation
return accumulator result
For example, in the OCL specification the exists operation of a collection type is defined in terms of an iterate operation as
post: result = collection->iterate(elem; acc : Boolean = false |
acc or expr)
which means that the accumulator is a Boolean value that is initialized to false, and then for each element the Boolean expression on the element is OR-ed with the accumulator. The final value of the accumulator is then returned to the caller. The first step of the translated AsmL code would then initialize a Boolean accumulator to false. The second step would evaluate expr and then OR the result of that expression with the accumulator.
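The pseudocode above is a fold over the collection. A C++ sketch of a generic iterate, with exists defined in terms of it in the same way as the specification fragment above (hypothetical signatures, restricted to vectors of int for brevity):

```cpp
#include <functional>
#include <vector>

// iterate: evaluate the body for each element, threading an accumulator
// through, exactly as in the pseudocode above.
template <typename T, typename Acc>
Acc iterate(const std::vector<T>& coll, Acc acc,
            const std::function<Acc(const T&, Acc)>& body) {
    for (const T& elem : coll)
        acc = body(elem, acc);
    return acc;
}

// exists via iterate: the accumulator starts at false and is OR-ed
// with the predicate result for each element.
bool exists(const std::vector<int>& coll,
            const std::function<bool(int)>& expr) {
    return iterate<int, bool>(coll, false,
        [&](const int& elem, bool acc) { return acc || expr(elem); });
}
```

select, reject, collect, and forAll follow the same recipe with a different accumulator type and update step.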
OCL Type Information
In the UML specification, the OclType operations provide type information. These type operations are implemented using the reflection capabilities of the .NET framework that are defined in the System.Reflection namespace. For example, the following code fragment is the implementation of the attributes() operation of the OclType class:
public attributes() as OclSet of OclString
    var s as OclSet of OclString = new OclSet of OclString
    var a as Set of OclString = {}
    var f = objectType.GetFields ((System.Reflection.BindingFlags.Public +
        System.Reflection.BindingFlags.NonPublic +
        System.Reflection.BindingFlags.Instance) as System.Reflection.BindingFlags)
    step foreach x in (f as PrimitiveArray of System.Reflection.FieldInfo)
        let temp = x.Name as String
        if temp.StartsWith ("at_") then
            let p = new OclString (temp.Substring (3) as String)
            add p to a
    step
        s.collSet := a   // copy the collected attribute names into the result set
    return s

OCL Parser
The OCL parser module contains a lexical analyzer, parser and a tree parser that are generated from the ANTLR parser generator. We use ANTLR because:
1. ANTLR generates LL-based recursive descent parsers which are easier to understand compared to bottom-up parsers.
2. ANTLR supports semantic and syntactic predicates in a grammar specification, and can use more than one token of look-ahead, which makes it possible to parse grammars with less ambiguity.
3. ANTLR can generate C++ parsers and is more suitable for the C++ implementation of the tool than other commonly used parser generators like yacc or bison.
4. ANTLR also supports tree parsers that will traverse a parse tree, such as one generated from a regular parser, and then perform actions on them. They are also represented by a grammar specification, similar to regular parsers.
We first use the OCL parser and lexical analyzer to parse OCL constraints and build a parse tree and build up a symbol table. Then the tree parser traverses the parse tree to actually translate the OCL constraints into AsmL.
Methodology for Translating OCL Constraints
We utilize a divide-and-conquer method to translate each OCL expression, whether it is a constraint, a pre-condition, a post-condition, or a 'let' statement defining a local variable or method used by constraints. With the help of the parser, each OCL expression is subdivided into many sub-expressions, and translation proceeds from the leaves of the parse tree upwards. Because OCL is an evaluative language, each sub-expression has a result type; it can therefore be evaluated, and the result of that evaluation is used in the containing expression, and so on. This means that the translation of an OCL constraint results in many sequential AsmL statements that store the results of intermediate sub-expressions before the larger containing expression is evaluated.
For example, consider the following invariant:
inv: self.isMarried implies self.age > 18
This constraint consists of an implies statement, which contains two operands that are sub-expressions. The dot operator in the first operand further subdivides that sub-expression into two more sub-expressions, which match the property call rule in the OCL grammar. The second operand in the implies expression is a greater-than expression that can be further divided into two sub-expressions, the second of which is a literal (the smallest possible expression). Taking all of these into account, a translation in pseudo-code would be:
temp1 = me
temp2 = temp1.isMarried
temp3 = me
temp4 = temp3.age
temp5 = new libOcl.OclInteger (18)
temp6 = temp4.isGreaterThan (temp5)
temp7 = temp2.implies (temp6)
As one can see, each sub-expression is evaluated one by one so that its result can be used by the containing expression until the entire OCL constraint (or method) is evaluated. The actual translation may be slightly different but this pseudo-code captures the basic concept in our translation schema. This translation schema is also unoptimized but optimization is left for future work.