Tag archive for: architecture


Towards a Cognitive TCP/IP Network Architecture

The principal aim of cognitive networking is to equip traditional networks with some form of intelligence, so that they can evolve and achieve higher levels of performance than are currently attainable. Typical characteristics of cognitive networks are the ability to monitor the environment they are deployed in, to take reasoned actions based on current conditions towards an end-to-end objective, and to learn from past experience. Such enhanced networks will likely be characterized by non-negligible complexity, which can be tolerated if accompanied by relevant benefits, such as increased performance or a reduced management burden. Various cognitive approaches have been proposed in the literature, ranging from general framework definitions, to cognitive node architectures, to specific implementations. This contribution illustrates the concepts we advanced in previous works, in which we described how the cognitive networking paradigm can fill the gap between service-oriented architectures and the network, and extend them by enabling reasoning with external information, i.e. information that is not locally available and is sensed elsewhere in the network. This gives the cognitive process a global vision of the network, which should facilitate better end-to-end performance.
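The monitor/reason/learn loop described in this abstract can be illustrated with a minimal sketch. All names and the decision rule below are illustrative assumptions, not the authors' design: a node merges locally sensed state with information sensed elsewhere in the network before acting, and keeps a memory of past (state, action) pairs that a richer policy could learn from.

```python
# Hypothetical cognitive-loop sketch: combine local and external (remotely
# sensed) observations into a global view, then take a reasoned action.

def cognitive_step(local_obs, remote_obs, history, policy):
    state = {**local_obs, **remote_obs}   # global view: local + external info
    action = policy(state, history)       # reasoned action toward an e2e goal
    return state, action

def greedy_policy(state, history):
    # Toy rule: cut the sending rate when network-wide congestion is high.
    if state["congestion"] > 0.8:
        return "reduce_rate"
    return "keep_rate"

history = []
state, action = cognitive_step({"loss": 0.01}, {"congestion": 0.9},
                               history, greedy_policy)
history.append((state, action))           # remember for future learning
print(action)  # reduce_rate
```

Note how the remote observation (`congestion`) drives the decision here; a purely local view (`loss` alone) would not have triggered it, which is the point of reasoning with external information.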

Power Efficient Processor Architecture and The Cell Processor

This paper provides a background and rationale for some of the architecture and design decisions in the Cell processor, a processor optimized for compute-intensive and broadband rich media applications, jointly developed by Sony Group, Toshiba, and IBM.

HISC: A computer architecture using operand descriptor

Computing has evolved from number crunching to today's cloud. Data are no longer mere numbers but information that must be appropriately guarded and easily transportable, yet the original von Neumann instruction model does not support this architecturally. This led us to develop a new architecture named HISC (High-level Instruction Set Computer), which attaches attributes to individual operands in an instruction for effective and efficient processing in today's computing. A HISC instruction consists of an operation code (opcode) and an index to a source or destination operand referenced by an operand descriptor, which contains the value and attributes for the operand. The value and attributes can be accessed and processed in parallel with the execution stages, introducing zero or low clock-cycle overhead. Object-oriented programming (OOP) requires strict access control for data. The Java model, jHISC, executes Java object-oriented programs not only faster than software JVMs but also with fewer cycles per instruction than other hardware Java processors. We also propose future extensions of the operand descriptor beyond OOP.
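The operand-descriptor idea can be sketched in a few lines. The field names and attribute set below are assumptions for illustration, not the actual HISC encoding: an instruction carries an opcode plus descriptor indices, and each descriptor bundles a value with access attributes that the hardware would check alongside execution.

```python
from dataclasses import dataclass

# Hypothetical operand descriptor: a value bundled with access attributes.
# In hardware these checks would proceed in parallel with execution.
@dataclass
class OperandDescriptor:
    value: int
    readable: bool = True
    writable: bool = True

def execute_add(descriptors, src1, src2, dst):
    """Sketch of one HISC-style instruction: opcode ADD plus three
    descriptor indices; attribute checks are part of operand access."""
    a, b, d = descriptors[src1], descriptors[src2], descriptors[dst]
    if not (a.readable and b.readable):
        raise PermissionError("source operand not readable")
    if not d.writable:
        raise PermissionError("destination operand not writable")
    d.value = a.value + b.value

table = [OperandDescriptor(2), OperandDescriptor(3), OperandDescriptor(0)]
execute_add(table, 0, 1, 2)
print(table[2].value)  # 5
```

The access-control part is what OOP needs: marking a descriptor `writable=False` makes the same ADD raise a `PermissionError` instead of silently overwriting protected data.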

An examination of the relation between architecture and compiler

The interactions between the design of a computer's instruction set and the design of compilers that generate code for that computer have serious implications for overall computational cost and efficiency. This article, which investigates those interactions, should ideally be based on comprehensive data; unfortunately, there is a paucity of such information. And while there is data on the use of instruction sets, the relation of this data to compiler design is lacking. This is, therefore, a frankly personal statement, but one which is based on extensive experience. My colleagues and I are in the midst of a research effort aimed at automating the construction of production-quality compilers. (To limit the scope of what is already an ambitious project, we have considered only algebraic languages and conventional computers.) In brief, unlike many compiler-compiler efforts of the past, ours involves automatically generating all of the phases of a compiler, including the optimization and code generation phases found in optimizing compilers. The only input to this generation process is a formal definition of the source language and target computer. The formulation of compilation algorithms that, with suitable parameters, are effective across a broad class of computer architectures has been fundamental to this research. In turn, finding these algorithms has led us to critically examine many architectures and the problems they pose. Much of the opinion that follows is based on our experiences in trying to do this, with notes on the difficulties we encountered.

Cloud Robotics: Architecture, Challenges and Applications

We extend the computation and information sharing capabilities of networked robotics by proposing a cloud robotic architecture. The cloud robotic architecture leverages the combination of an ad-hoc cloud formed by machine-to-machine (M2M) communications among participating robots, and an infrastructure cloud enabled by machine-to-cloud (M2C) communications. Cloud robotics utilizes an elastic computing model, in which resources are dynamically allocated from a shared resource pool in the ubiquitous cloud, to support task offloading and information sharing in robotic applications. We propose and evaluate communication protocols, and several elastic computing models to handle different applications. We discuss the technical challenges in computation, communications and security, and illustrate the potential benefits of cloud robotics in different applications.
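The task-offloading side of the elastic computing model can be illustrated with a toy cost comparison. The cost model and every number below are assumptions, not the paper's protocols: a robot offloads a task to the infrastructure cloud only when uplink transfer plus remote execution beats local execution.

```python
# Illustrative M2C offloading decision: compare local execution time
# against transfer time plus remote execution time in the cloud.

def should_offload(task_cycles, input_bytes, local_hz, cloud_hz, uplink_bps):
    local_time = task_cycles / local_hz
    remote_time = input_bytes * 8 / uplink_bps + task_cycles / cloud_hz
    return remote_time < local_time

# A compute-heavy vision task with a small input favours the cloud...
print(should_offload(5e9, 1e5, local_hz=1e9, cloud_hz=2e10,
                     uplink_bps=1e7))  # True
# ...while a light task dominated by transfer time stays on the robot.
print(should_offload(1e7, 1e7, local_hz=1e9, cloud_hz=2e10,
                     uplink_bps=1e7))  # False
```

A real elastic model would also weigh energy, M2M sharing among nearby robots, and cloud resource availability, but the same break-even structure applies.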

CogNet: A Network Management Architecture Featuring Cognitive Capabilities

It is expected that fifth-generation mobile networks (5G) will support both human-to-human and machine-to-machine communications, connecting up to trillions of devices and reaching formidable levels of complexity and traffic volume. This brings a new set of challenges for managing the network due to its diversity and sheer size. It will be necessary for the network to largely manage itself and deal with organisation, configuration, security, and optimisation issues. This paper proposes an architecture for an autonomic self-managing network based on Network Function Virtualization, which is capable of achieving or balancing objectives such as high QoS, low energy usage, and operational efficiency. The main novelty of the architecture is the Cognitive Smart Engine, introduced to enable Machine Learning, particularly (near) real-time learning, in order to dynamically adapt resources to the immediate requirements of the virtual network functions while minimizing performance degradation to fulfill SLA requirements. This architecture is built within the CogNet European Horizon 2020 project, whose name refers to Cognitive Networks.
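As a toy illustration of dynamically adapting resources to a virtual network function's immediate load (this is a sketch under our own assumptions, not the CogNet Cognitive Smart Engine), a simple forecast can drive horizontal scaling against an SLA-driven utilisation target:

```python
import math

# Hypothetical near-real-time VNF scaler: an exponential moving average
# forecasts the next load sample, and enough instances are provisioned to
# keep predicted per-instance utilisation under the SLA target.

def scale_vnf(load_samples, capacity_per_instance, target_util=0.7, alpha=0.3):
    forecast = load_samples[0]
    for x in load_samples[1:]:
        forecast = alpha * x + (1 - alpha) * forecast  # EMA forecast
    return max(1, math.ceil(forecast / (capacity_per_instance * target_util)))

# Rising load across four samples -> scale out ahead of demand.
print(scale_vnf([100, 120, 180, 240], capacity_per_instance=100))  # 3
```

Real (near) real-time learning would replace the EMA with a trained model, but the control loop shape — predict, compare against SLA headroom, resize — is the same.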

Optimal geometric design of monolithic thin-film solar modules: Architecture of polymer solar cells

In this study the geometrical optimization of solar cells monolithically integrated into serially connected solar modules is reported. Based on the experimental determination of the electrodes' sheet and intermittent contact resistances, the overall series resistance of individual solar cells and interconnected solar modules is calculated. Taking a constant photocurrent generation density into account, the total Joule (resistive) power losses are determined by a self-consistent simulation according to the 1-diode model. This method allows optimization of the solar module geometry depending on the material system applied. As an example, polymer solar modules based on ITO electrodes and ITO-free electrodes were optimized with respect to structuring dimensions.
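The geometric trade-off behind this optimization can be sketched with a deliberately simplified loss model (the parameter values and the model itself are our assumptions, not the paper's self-consistent 1-diode simulation): wider cell stripes dissipate more power in the electrode's sheet resistance, while narrower stripes waste more area on the interconnects, so an optimum stripe width lies in between.

```python
# Simplified loss model for one cell stripe in a monolithic module:
#   resistive loss fraction ~ J * R_sheet * w_active^2 / (3 * V)
#   dead-area loss fraction ~ d / w
# All parameters below are illustrative, ITO-like guesses.

def fractional_loss(width_cm, dead_cm, r_sheet, j_mpp, v_mpp):
    active = width_cm - dead_cm
    resistive = j_mpp * r_sheet * active**2 / (3 * v_mpp)  # Joule loss
    dead_area = dead_cm / width_cm                          # inactive area
    return resistive + dead_area

widths = [0.2 + 0.05 * i for i in range(40)]                # 0.2 .. 2.15 cm
best = min(widths, key=lambda w: fractional_loss(w, dead_cm=0.03,
                                                 r_sheet=10.0,  # ohm/sq
                                                 j_mpp=0.008,   # A/cm^2
                                                 v_mpp=0.5))    # V
print(round(best, 2))  # 0.65  (optimum under these assumptions)
```

Raising the sheet resistance (e.g. an ITO-free electrode with different properties) shifts the optimum toward narrower stripes, which is exactly why the paper optimizes structuring dimensions per material system.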

An expert system hybrid architecture to support experiment management

Specific expert systems are used to support, speed up, and add precision to in silico experimentation in many domains. In particular, many experimentalists show a growing interest in workflow management systems for building pipelines of experiments. Unfortunately, these types of systems do not integrate a systematic approach or a support component for workflow composition and reuse. For this reason, in this paper we propose a knowledge-based hybrid architecture for designing expert systems that are able to support experiment management. This architecture defines a reference cognitive space and a proper ontology that describe the state of a problem from three different perspectives at the same time: procedural, declarative, and workflow-oriented. In addition, we introduce an instance of our architecture in order to demonstrate the features of the proposed work. In particular, we model a bioinformatics case study, following the proposed hybrid architecture guidelines, to explain how to design and integrate the required knowledge into an interactive system for composing and running scientific workflows.