Vol. 1, No. 2

Papers

  1. ABSTRACT It is beneficial to apply functional refinement in Object-Oriented (OO) software development. This paper proposes a method that enhances our earlier method to complement existing OO methods for realizing use-cases through functional refinement. Using an augmented data flow diagram (DFD), called DFD+, the proposed method bridges functional refinement and OO decomposition systematically and precisely. In the requirements analysis stage, the method realizes use-cases through functional refinement and specifies them in DFD+s. In the design and implementation stages, it transforms the DFD+s systematically and precisely into an OO design and implementation. The method integrates seamlessly with existing OO methods: the more complex use-cases can be realized with the proposed method and the remaining ones with any existing OO method. The method is amenable to automation, and a prototype has been developed to support the transformation process. We have also validated the method through case studies.
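
     The transformation rules themselves are not given in the abstract. As a rough, hypothetical sketch of the general idea (mapping data-flow nodes in a DFD+-like model onto an OO skeleton), consider the Python fragment below; the Process/DataStore model and the mapping rule (stores become classes, processes touching a store become its methods) are illustrative assumptions, not the paper's actual method.

         # Hypothetical sketch: deriving an OO skeleton from a DFD+-like model.
         # Node types and mapping rules are illustrative only.
         from dataclasses import dataclass, field

         @dataclass
         class Process:                 # a functional node (a refined step)
             name: str
             flows: list = field(default_factory=list)  # names of stores it reads/writes

         @dataclass
         class DataStore:               # a persistent data node
             name: str
             fields: list = field(default_factory=list)

         def to_oo_skeleton(processes, stores):
             """Illustrative rule: each data store becomes a class; each
             process that reads or writes a store becomes a method on it."""
             classes = {s.name: {"attributes": s.fields, "methods": []} for s in stores}
             for p in processes:
                 for flow in p.flows:
                     if flow in classes:
                         classes[flow]["methods"].append(p.name)
             return classes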

  2. ABSTRACT This paper presents the Software Architecture Risk Assessment (SARA) Tool, which demonstrates the process of risk assessment at the software architecture level. The prototype tool accepts different types of input files that define a software architecture. It parses these input files and produces quantitative metrics that are used to estimate the required risk factors. The final result of this process is to identify the potentially high-risk components in the software system. By combining data acquired from domain experts with measures obtained from Unified Modeling Language (UML) artifacts, the SARA Tool can be used at the architecture development phase, the design phase, or the implementation phase of the software development process to improve the quality of the software product.
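
     The abstract does not give the risk formula. A common formulation in architecture-level risk assessment combines a complexity metric (derivable from UML artifacts) with a severity weight (from domain experts); the sketch below assumes that formulation, and the component names and numbers are made up for illustration, not SARA's actual computation.

         # Illustrative component risk estimate: risk = complexity x severity.
         def component_risk(complexity: float, severity: float) -> float:
             """complexity in [0, 1] from UML-derived metrics;
             severity in [0, 1] from domain-expert judgment."""
             return complexity * severity

         components = {                  # (complexity, severity) -- made-up values
             "Scheduler": (0.8, 0.9),
             "Gateway":   (0.6, 0.7),
             "Logger":    (0.3, 0.2),
         }
         ranked = sorted(components.items(),
                         key=lambda kv: component_risk(*kv[1]), reverse=True)
         for name, (c, s) in ranked:     # highest-risk components first
             print(f"{name}: risk = {component_risk(c, s):.2f}")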

  3. ABSTRACT Meeting stakeholders' requirements and expectations is one of the critical aspects on which any software organization in a market-driven environment focuses, and organizations expend considerable effort and expense to maximize the satisfaction of their stakeholders. Identifying the contents of a software product release therefore becomes one of the critical decisions for software product success. Requirements prioritization refers to the activity through which the release contents that maximize stakeholder satisfaction can be identified [8]. This paper illustrates a value-oriented requirements prioritization approach for software product management. The technique proposed in this paper is based on the Hierarchical Cumulative Voting (HCV) and Value-Oriented Prioritization (VOP) techniques. The proposed technique, Value-Oriented HCV (VOHCV), addresses the weakness of HCV by selecting the best candidate requirements for each release based not only on stakeholders' perceived value, as HCV does, but also on the associated anticipated cost, technical risk, relative impact, and market-related aspects. VOHCV also addresses the weakness of VOP by supporting not only a flat requirements structure, as VOP does, but also a hierarchical structure. By this means, VOHCV inherits the strengths of both VOP and HCV and addresses their weaknesses while selecting the best candidate release requirements, to maximize stakeholders' value and satisfaction [11].
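
     As a hedged illustration of the value-oriented side only (it omits HCV's hierarchical cumulative voting), the sketch below scores each candidate requirement by combining the factors the abstract lists; the weights and the linear formula are assumptions for illustration, not the paper's definition of VOHCV.

         # Hypothetical value-oriented score for a candidate requirement.
         # Positive factors (value, impact, market) add; cost and risk subtract.
         # Weights are illustrative, not taken from VOP, HCV, or VOHCV.
         WEIGHTS = {"value": 0.4, "impact": 0.15, "market": 0.15,
                    "cost": 0.2, "risk": 0.1}

         def vo_score(req: dict) -> float:
             return (WEIGHTS["value"] * req["value"]
                     + WEIGHTS["impact"] * req["impact"]
                     + WEIGHTS["market"] * req["market"]
                     - WEIGHTS["cost"] * req["cost"]
                     - WEIGHTS["risk"] * req["risk"])

         requirements = [   # made-up stakeholder scores on a 0-9 scale
             {"id": "R1", "value": 9, "impact": 7, "market": 8, "cost": 4, "risk": 2},
             {"id": "R2", "value": 6, "impact": 5, "market": 4, "cost": 7, "risk": 6},
         ]
         release = sorted(requirements, key=vo_score, reverse=True)
         print([r["id"] for r in release])   # best candidates first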

  4. ABSTRACT We make use of a well-known data structure consisting of two linear arrays to represent the Component Interaction Graph (CIG) and have experimented with some possible CIGs for a Component-Based Software (CBS) system to show the quantitative characteristics of the dependencies and to understand the ways in which these dependencies can be managed or minimized. We have developed a tool, 'CIGIET', for this purpose. An understanding of the interconnections of components is also desirable for maintenance purposes. Based on the observations, we suggest some guidelines for designing a CBS for functionality along with maintainability. This work attempts to provide an initial background for meaningful studies related to the concept of 'Design for Maintainability'.
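
     The "two linear arrays" representation the abstract refers to is the classic compressed-adjacency (forward-star) form: one array of edge targets and one array of per-node offsets into it. A minimal sketch follows; the four-component CIG is a made-up example, not one from the paper.

         # Compressed adjacency ("two linear arrays") form of a component
         # interaction graph: targets[offsets[i]:offsets[i+1]] lists the
         # components that component i directly depends on.
         components = ["UI", "Auth", "DB", "Log"]
         # Edges: UI->Auth, UI->Log, Auth->DB, Auth->Log, DB->Log
         offsets = [0, 2, 4, 5, 5]      # length = number of components + 1
         targets = [1, 3, 2, 3, 3]      # indices into `components`

         def dependencies(i: int):
             """Components that component i directly interacts with."""
             return [components[t] for t in targets[offsets[i]:offsets[i + 1]]]

         for i, name in enumerate(components):
             print(f"{name} -> {dependencies(i)}")

         # Fan-out per component, a simple dependency metric one might
         # minimize when designing for maintainability:
         fan_out = [offsets[i + 1] - offsets[i] for i in range(len(components))]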

  5. ABSTRACT The accuracy of classification rules learned in data mining is affected by the learning algorithm used and by the availability of the whole training set in main memory during the learning process. In this paper, we propose a combination of data reduction techniques based on attribute relevancy, data abstraction, and data generalization. We also propose a hybrid classification algorithm based on a decision tree and a genetic algorithm. The decision tree, as a greedy algorithm, handles generalization, where each learned rule is covered by a large number of examples in the training set ("large-scope rules"). The genetic algorithm handles specialization in the training set, where a small number of examples covers each learned rule ("small-scope rules").
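
     The large-scope/small-scope distinction comes down to how many training examples each rule covers. The sketch below shows only that coverage split (a plausible step before handing small-scope rules to a genetic algorithm for refinement); the rule format and threshold are assumptions, not the paper's algorithm.

         # Illustrative coverage split: rules covering many examples are
         # "large-scope" (kept from the decision tree); rules covering few
         # are "small-scope" (candidates for genetic-algorithm refinement).
         def covers(rule: dict, example: dict) -> bool:
             """A rule is a dict of attribute -> required value."""
             return all(example.get(a) == v for a, v in rule.items())

         def split_rules(rules, training_set, threshold=0.5):
             large, small = [], []
             for rule in rules:
                 coverage = sum(covers(rule, ex) for ex in training_set)
                 (large if coverage / len(training_set) >= threshold else small).append(rule)
             return large, small

         train = [{"color": "red",  "size": "big"},
                  {"color": "red",  "size": "small"},
                  {"color": "blue", "size": "big"}]
         rules = [{"color": "red"}, {"color": "blue", "size": "small"}]
         large_scope, small_scope = split_rules(rules, train)  # -> 1 large, 1 small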