FINAL PROGRAM - AIAA Houston Section



PROCEEDINGS

Of the

WORKSHOP ON AUTOMATION AND ROBOTICS (WAR 2009)

Organized by AIAA Houston Section Technical Committee on Automation and Robotics, IEEE Galveston Bay Section Joint Chapter

and IEEE Houston Section Computational Intelligence Society Chapter

And

INNOVATION 2009

Organized by

The Clear Lake Council of Technical Societies

In cooperation with CLCTS member organizations

For the JAIPCC BOARD

Robert Gilruth Center

Johnson Space Center, Houston, Texas

September 25th, 2009

WELCOME to the Workshop on Automation and Robotics (WAR 2009) and INNOVATION 2009!

This year's program continues the WAR / Innovation pattern of wide-ranging exploration of technologies, both topic-related (WAR) and panoramic (Innovation). This dual-track approach facilitates communication among colleagues working in related areas, as well as cross-pollination between groups that could benefit from breakthroughs they might not ordinarily hear about. We are happy in 2009 to again receive substantive contributions from a technologically diverse, savvy group of researchers, programmers and engineers.

Every year, members of the Clear Lake Council of Technical Societies join our efforts to provide this technical forum for us to share our vision, to report technical progress, and to exchange innovative ideas toward advancing technology for a better future, especially a better technological future for the space community in the Clear Lake area.

To make this event beneficial to all attendees, we encourage everyone to engage in discussion with the presenters. We are grateful to all the presenters who volunteered to contribute and share their knowledge with us. Being a multi-disciplinary event, it provides a unique opportunity for various areas of interest to interact and discover collaborative means of advancing the technologies.

Our appreciation goes to all the volunteers who contributed their time and effort in making this event successful!

Sincerely,

Paul Frenger, MD

A Working Hypothesis Inc

General Chair

Liwen Shih, Ph.D.

University of Houston Clear Lake

Co-Program Chair

Zafar Taqvi, Ph.D.

Barrios Technology Inc

Co-Program Chair

PROGRAM

Workshop on Automation and Robotics (WAR) 2009

FRIDAY, September 25th, 2009

|8:15 – 8:30 AM |Registration & Coffee |

|8:30 – 8:45 AM |Welcome: Dr. Zafar Taqvi (AIAA A&R TC Chair); Barrios Technology |

| |Satya Pilla, Vice Chair (Technical), AIAA Houston Section |

| |Session WAR-01 (Lone Star Room) |

| |Chair: Paul Frenger M.D., A Working Hypothesis Inc |

|8:45 – 9:10 AM |Topic: "Lunar Surface Emergency Path Planning using Particle Swarm Optimization" |

| |Presenter: Brian Birge, Ph.D., L3 |

|9:10 – 9:35 AM |Topic: "Robot Operating Systems" |

| |Presenter: Paul Frenger M.D., A Working Hypothesis Inc |

|9:35 – 10:00 AM |Topic: “Recurrent Neural Networks for Intelligent and Autonomous Robot Control” |

| |Presenter: Henry I. Ibekwe and Ali K. Kamrani, Ph.D., UH |

|10:00 – 10:15 AM |Break |

| |Session WAR-02 (Lone Star Room) |

| |Chair: Satya Pilla, Boeing |

|10:15 – 10:40 AM |Topic: “Robotic Innovation of Human Space Exploration” |

| |Presenters: Alex Monchak and Ki-Young Jeong, UHCL |

|10:40 – 11:10 AM |Topic: “The new Theory of Objects and the Automatic Generation of Intelligent Agents” |

| |Presenter: Sergio Pissanetzky, Research Scientist. |

|11:10 – 11:40 AM |Topic: “Temporal Logic in Robotics – The Temporal Engineering of Software” |

| |Presenter: Gordon Morrison |

LUNCHEON KEYNOTE

| |Luncheon (Lone Star Room) |

|11:40 AM |Welcome: Dr. Paul Frenger, A Working Hypothesis Inc |

| |Introduction of Speaker: Dr Liwen Shih, UHCL |

|12:00 – 12:55 PM |Keynote Presentation: "Global Brain Movement: An Asia Perspective" |

| |Presenter: Dr. Da Hsuan Feng, Senior Executive Vice President, National Cheng Kung University, |

| |Tainan, Taiwan |

INNOVATION 2009 PROGRAM


| |Session 1: Computation (Lone Star Room) |

| |Chair: Dr Liwen Shih, UHCL |

|1:00 – 1:25 PM |Topic: “Real-Time Widely Distributed Simulations” |

| |Presenter: Robert Phillips , L3 |

|1:25 – 1:50 PM |Topic: "Exploiting Parallelism in Space Radiation Analysis and Other Scientific Computations" |

| |Presenter: Frank Christiny and Liwen Shih, Ph.D. |

|1:50 – 2:15 PM |Topic: “Exploiting Large On-Chip Memory Space Through Data Recomputation” |

| |Presenter: Hakduran Koc, Ph.D., UHCL |

|2:15 – 2:25 PM |Break |


| |Session 2: Biomedical Studies (Lone Star Room) |

| |Chair: Dr. Paul Frenger, A Working Hypothesis Inc |

|2:25 – 2:50 PM |Topic: "New Devices for Non-invasive Diagnosis and Staging of Prostate Cancer by Zinc Sensing" |

| |Presenter: Christopher J. Frederickson, Ph.D., Andro Diagnostics, Inc |

|2:50 – 3:15 PM |Topic: “An approach in MRI-guided robot-assisted localized molecular imaging” |

| |Presenters: Ahmet E. Sonmez and Nikolaos V. Tsekos, UH |

| |Session 3: General (Lone Star Room) |

| |Chair: Hakduran Koc, Ph.D., UHCL |

|3:15 – 3:40 PM |Topic: “Solar Event Data Survey for Space Weather Prediction” |

| |Presenters: Twinkle Agarwal, Lance Hoang, Dan Fry and Liwen Shih, Ph.D., UHCL |

|3:40 – 4:05 PM |Topic: "Give NASA a Chance to be the Next Google" |

| |Presenters: Chris Bronk, Ph.D., Tony Elam, and Tory Gattis |

|4:05 – 4:30 PM |Topic: "Games are Not Just for Fun" |

| |Presenter: Joseph Giarratano, Ph.D., UHCL |

WAR-INNOVATION 2009 ORGANIZATION 

General Chair Paul Frenger, MD, A Working Hypothesis Inc

Co-Program Chair Liwen Shih, Ph.D., University of Houston Clear Lake

Co-Program Chair Zafar Taqvi, Ph.D., Barrios Technology Inc

Facilities Andy Lindberg, Retired

Logistics Arland Actkinson, Consultant

Publication Norm Chaffee, ex-officio

JAIPCC BOARD

IEEE Galveston Bay Section Don Cravey, Julian Morales

UHCL Liwen Shih Ph.D., Thomas Harman Ph.D.

ISA Arland Actkinson (Chair)

Executive Secretary Zafar Taqvi (non-voting)

Treasurer (Acting) Andy Lindberg (non-voting)

Clear Lake Council of Technical Societies

Chair Norman Chaffee

Vice Chair Ken Goodwin, Ph.D.

Secretary Zafar Taqvi, Ph.D.

Treasurer Julian Morales

Chair Emeritus Andy Lindberg

WAR Sessions

| |Session WAR-01 (Lone Star Room) |

| |Chair: Paul Frenger M.D., A Working Hypothesis Inc |

|8:45 – 9:10 AM |Topic: "Lunar Surface Emergency Path Planning using Particle Swarm Optimization" |

| |Presenter: Brian Birge, Ph.D., L3 |

|9:10 – 9:35 AM |Topic: "Robot Operating Systems" |

| |Presenter: Paul Frenger M.D., A Working Hypothesis Inc |

|9:35 – 10:00 AM |Topic: “Recurrent Neural Networks for Intelligent and Autonomous Robot Control” |

| |Presenter: Henry I. Ibekwe and Ali K. Kamrani, Ph.D., UH |

|10:00 – 10:15 AM |Break |

| |Session WAR-02 (Lone Star Room) |

| |Chair: Satya Pilla, Boeing |

|10:15 – 10:40 AM |Topic: “Robotic Innovation of Human Space Exploration” |

| |Presenters: Alex Monchak and Ki-Young Jeong, UHCL |

|10:40 – 11:10 AM |Topic: “The new Theory of Objects and the Automatic Generation of Intelligent Agents” |

| |Presenter: Sergio Pissanetzky, Research Scientist. |

|11:10 – 11:40 AM |Topic: “Temporal Logic in Robotics – The Temporal Engineering of Software” |

| |Presenter: Gordon Morrison |

Lunar Surface Emergency Path Planning using Particle Swarm Optimization

Brian Birge, Ph.D. /L3

Abstract

When dealing with manned exploration scenarios, the design priority is always placed on human safety. Proposed lunar exploration tasks raise the possibility of astronauts traveling several kilometers away from a home base, and they are not expected to follow a pre-planned path while exploring. There is a real possibility of an emergency situation of some kind, for example a broken arm, an equipment malfunction, or notification of a gamma ray burst. Because of this, there is a need to develop a time-optimal return path to the home base, which may or may not be similar to the outbound path. Given a random lunar surface position, we wish to determine the path to another arbitrary surface point, subject to some constraints.

Constraints can include avoiding obstacles such as craters, boulders, and steep inclines. They can also take the form of limiting fuel usage, time in full sunlight, etc. And of course, in an emergency situation, a primary constraint is distance to the destination.

A relatively recent method of evolutionary computation is Particle Swarm Optimization (PSO), originally inspired by efforts to model bird flocks. The algorithm works well on high-dimensional, non-linear optimization problems with multiple objectives. With some modifications it works with non-stationary error topologies as well. Continued refinement of PSO in the larger research community comes from attempts to understand human social interaction as well as analysis of emergent behavior.

A novel path planning algorithm has been developed, built around PSO, that generates a piecewise-linear path from a set of optimal waypoints. The path is guaranteed to be continuous even though the problem space itself may be discontinuous. The path avoids obstacles while minimizing total path distance.

The presentation covers the general problem, a brief introduction to PSO, and some details of the associated cost function development. A laptop PC-based Matlab demonstration of optimal path generation and obstacle avoidance will also be shown, along with continuing work on refining the algorithm, efforts toward more environmental realism, considerations when including a Rover control system, and the overall vision for project integration.
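To make the waypoint formulation above concrete, the following Python sketch couples a plain global-best PSO with a piecewise-linear path cost: each particle encodes the free waypoints as a flat vector, and the cost is total path length plus a penalty for intruding into an obstacle disc. It is illustrative only; the start and goal points, obstacle list, penalty weight, and swarm parameters are invented placeholders, not the cost function, terrain model, or PSO variant used in the presentation.

# Hypothetical sketch of waypoint-based path planning with PSO (not the
# presenter's actual cost function or parameters).
import numpy as np

START = np.array([0.0, 0.0])                     # assumed astronaut position
GOAL = np.array([10.0, 10.0])                    # assumed home-base position
OBSTACLES = [(4.0, 4.5, 1.5), (7.0, 8.0, 1.0)]   # (x, y, radius) craters/boulders
N_WAYPOINTS = 5                                  # free waypoints per candidate path

def path_cost(flat):
    """Length of the piecewise-linear path plus penalties for obstacle intrusion."""
    pts = np.vstack([START, flat.reshape(N_WAYPOINTS, 2), GOAL])
    length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    penalty = 0.0
    for (ox, oy, r) in OBSTACLES:
        d = np.linalg.norm(pts - np.array([ox, oy]), axis=1)
        penalty += np.sum(np.maximum(0.0, r - d)) * 100.0
    return length + penalty

def pso(cost, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Plain global-best PSO over the box [0, 10]^dim."""
    x = np.random.uniform(0.0, 10.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1 = np.random.rand(n_particles, dim)
        r2 = np.random.rand(n_particles, dim)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([cost(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, cost(g)

best, best_cost = pso(path_cost, dim=2 * N_WAYPOINTS)
waypoints = np.vstack([START, best.reshape(N_WAYPOINTS, 2), GOAL])
print("best path cost:", round(best_cost, 2))
print(waypoints)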

Robot Operating Systems

Paul Frenger M.D., Senior Member IEEE

A Working Hypothesis, Inc.

P.O. Box 820506, Houston, TX 77282

281-293-9484

pfrenger@alumni.rice.edu

Abstract

Like a computer’s operating system (OS), a robot’s OS provides a standardized application program interface (API) between the intrinsic hardware and the user’s programs for accessing inputs, outputs, memory, mass storage and peripherals. This access is by means of predefined system calls to the OS kernel via interrupts or subroutines. The robot OS may run on a simple CPU executing a single program, or it may support multitasking, multiprocessing, networking and real-time execution. The robot’s OS may utilize an underlying commercial computer OS (Linux, Apple Mac OS X, Windows, MS/DOS), a real-time operating system (RTOS), or directly control the system processor.

The “user” may be a human, or it may be an autonomous artificial intelligence agent program. Many advanced anthropoid robotic designs have been created in the last decade, including those by MIT staff (Cog, Kismet and Domo), Honda (Asimo), and Sony (QRIO). Despite their sophistication, these devices have no shared OS, which dooms ultimate long-term interoperability and code reuse [1]. Interest in a common robot OS is growing.

The author has reviewed earlier attempts at a robot-capable OS, from KAREL (Pascal, 1981) to various Java implementations [2]. One interesting attempt was Tps (Tiny Postscript), developed with DARPA funding at the University of Colorado. Tps was a version of the Forth-like Postscript printer interpreter with the graphics processing code removed. It emphasized transportability and supported the ability to save the state of a running program, move it to another processor, then resume correctly. Tps ran under Unix / Solaris [3]. Open Firmware (OF, IEEE Standard 1275-1994) is a Forth-language based “plug and play” software product used on Apple, IBM, Sun and One Laptop per Child computers. It runs on the native computer hardware (which it tests and initializes), then boots the system OS [4]. All added hardware contains OF identifier data and software drivers in ROM, which facilitates the plug and play function. The author used OF for the robot control system (RCS) of his ANNIE anthropoid robot (1999), which utilized a stackable Pentium PC/104 bus (IEEE P996.1) in a multiprocessor network. He also gave the RCS synthetic human-like emotions [5], then added an artificial intelligence capability to create a human nervous system function emulator [6]. The current version (Brain.Forth) can control simian as well as humanoid robots, and emulates the effects of hormones (epinephrine, oxytocin), drugs (opiates and marijuana), and various disorders (developmental delay, multiple sclerosis, fibromyalgia, anxiety, psychosis and Alzheimer’s disease).

YARP (Yet Another Robot Platform) grew out of the Kismet design, and with revisions is being used by the RobotCub Consortium. It runs under Linux (Windows, OS X pending) and is based on C++, with future ports to Java, Python and C#. It was created to avoid dead-end robotic programming [7]. Microsoft released Robotics Developer Studio in 2007, based on a .NET library and supporting C#, Visual BASIC, JScript and IronPython [8]. It is mainly used for simulations and non-hardware robots.

A new Robot Operating System (ROS) is under development by teams at Stanford University, MIT and the Technical University of Munich [9-10]. Currently it runs on the Linux computer OS. Besides its OS services, ROS provides libraries for hardware abstraction, low-level device control, message passing between processes, and object management. The C++ and Python languages are supported. Caltech has recently developed a “Robust Real-Time Reconfigurable Robotics Software Architecture” (R4SA) for NASA, initially intended for mobile robot exploration of Mars. This package, written in C, establishes a real-time robotic computing environment in three layers: a driver layer, which rests on the system hardware; a device layer to abstract higher-level hardware dependencies and motion control operations; and an application layer which executes the robot’s main mission [11]. This robot OS can deal with wheels or arms and legs, as need be, as well as a variety of sensors. For remote control of robots, a graphics user interface (GUI) is included. The GUI can display variables (i.e.: on a simulated oscilloscope) and is certified for Mars Exploration Rover operations for ground support and flight systems. A technical support package is available for the software and its GUI [12], and they can be licensed for commercial use through Caltech.

In conclusion, although operating systems for robotics are well behind those available for personal computers, recent developments are rapidly closing this gap. The robot OS of the near future will facilitate hardware and code reuse, thus accelerating progress. By adding an artificial intelligence capability and synthetic emotions to the basic robot OS, as the author has done, the creation of truly intelligent autonomous robot agents will become possible much sooner than predicted [13].
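As a purely illustrative aside, and not code from any of the systems surveyed above (in particular it is not the actual ROS or YARP API), the Python sketch below shows the architectural idea they share: a hardware-abstraction driver and an application-level controller exchanging standardized messages over a topic bus. All class, topic, and parameter names here are invented.

# Toy illustration of publish/subscribe message passing between a driver and a
# controller, in the spirit of ROS/YARP but NOT using their actual APIs.
from collections import defaultdict

class MessageBus:
    """Minimal in-process topic bus: nodes publish to and subscribe on topics."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers[topic]:
            callback(message)

class RangeSensorDriver:
    """Hardware-abstraction layer: hides the device and emits standard messages."""
    def __init__(self, bus):
        self.bus = bus

    def poll(self, raw_reading_m):
        self.bus.publish("range", {"distance_m": raw_reading_m})

class ObstacleStopController:
    """Application layer: reacts to sensor messages with motor commands."""
    def __init__(self, bus, stop_below_m=0.5):
        self.bus = bus
        self.stop_below_m = stop_below_m
        bus.subscribe("range", self.on_range)

    def on_range(self, msg):
        cmd = 0.0 if msg["distance_m"] < self.stop_below_m else 0.2
        self.bus.publish("wheel_velocity", {"mps": cmd})

bus = MessageBus()
ObstacleStopController(bus)
bus.subscribe("wheel_velocity", lambda m: print("command:", m))
driver = RangeSensorDriver(bus)
driver.poll(1.2)   # clear ahead -> drive
driver.poll(0.3)   # obstacle   -> stop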

References

1. Frenger, P., “Toward Horizontal Integration of Anthropoid Robotic Control Systems”, 3rd Intl Conf Computing, Communications & Control Technologies, July 24-27, 2005, Austin TX, pg.142-148.

2. Frenger, P., “Robot Control Techniques, Part One: A Review of Robotics Languages”, ACM Sigplan Notices, April 1997, pg.27-31.

3. Inquiries: former developer and custodian at the University of Colorado:

Dennis. Heimbigner@.

4. .

5. Frenger, P., “Linear Circuits for Neural Networks and Affective Computing”, Biomed Sci Instrum, 35, 1999, Copper Mtn CO, pg.247-252.

6. Frenger, P., “Human Nervous System Function Emulator”, Biomed Sci Instrum, 36, 2000, pg.289-294.

7. Fitzpatrick, P., Metta, G. and Natale, L., “Towards long-lived robot genes”, Robotics and Autonomous Systems, 56 (1), Jan 2008, pg.29-45.

8. .

9. Campbell, M., “Robots to get their own operating system”, New Scientist, Aug 8, 2009, pg. 18-19.

10. (Robot_Operating_System).

11. “Robust Software Architecture for Robots”, NASA Tech Briefs, July, 2009, pg. 58.

12. . Refer to NPO-41796 and NPO-41797.

13. Kurzweil, R., The Age of Spiritual Machines, Penguin Putnam, Inc., New York, 1999, pg.202-219

Recurrent Neural Networks for Intelligent and Autonomous Robot Control

Henry I. Ibekwe and Ali K. Kamrani, Ph.D.

Email: henry.ibekwe@mail.uh.edu & ali.kamrani@mail.uh.edu

Department of Industrial Engineering

University of Houston

Houston, Texas

Abstract

Intelligent decision-making and adaptation by autonomous agents in a complex, dynamic and unstructured world has been at the forefront of robotics research. The ability to design and develop autonomous agents—particularly robots—that perceive characteristics of such an environment in order to make high-level goal-based decisions is well known to be a difficult problem. This problem has also led to numerous inter-disciplinary studies to understand the underlying nature of intelligent behavior by autonomous agents. By intelligent behavior we mean the ability of a self-contained system to make decisions based on interaction with its environment so as to maximize a variety of performance measures. For most biological agents, the ultimate performance measure is survival and procreation in a dynamic world, whereas for artificial agents such as robots, the performance measure is designer-specified based on the desired tasks and objectives for the robot.

The potential applications of autonomous intelligent robots across diverse industries are vast. The limiting factor has been designing reliable control architectures that are able to operate in complex environments such as the natural unstructured world humans inhabit. We attempt to address this problem by focusing on embedding artificial neural networks, particularly dynamic recurrent neural networks, in autonomous robots for intelligent control, learning and adaptive behavior. Neural networks are networks of highly interconnected, massively parallel and distributed computational units possessing the ability to learn and adapt from changing inputs. Classical control architectures developed within the Artificial Intelligence (AI) discipline focused on hierarchical or deliberative control, whereby the agent senses the environment, plans actions, then acts on the environment. Attention later shifted to reactive control architectures with tight coupling of sensors and actuators when it became evident that hierarchical controllers were unreliable as the complexity and uncertainty of the operating environment increased. However, reactive control architectures are incapable of high-level goal-based decision making and learning. We thus enlist recurrent neural network control architectures that have the potential to solve the problems of operation in dynamic and complex environments while learning from their actions to achieve the designed goals.
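The abstract does not specify the network architecture or training procedure; as a minimal, hypothetical sketch of a dynamic recurrent controller of the kind described, the Python fragment below implements a discrete-time Elman-style update whose hidden state carries memory of past sensor readings. The layer sizes, random weights, and sensor/actuator interpretation are placeholders, and no learning rule is shown.

# Minimal Elman-style recurrent network step for a sensor-to-actuator mapping.
# Illustrative only: dimensions, weights, and training are placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 8, 2                  # e.g., 4 range sensors -> 2 wheel speeds

W_in = rng.normal(0, 0.5, (n_hidden, n_in))      # input -> hidden
W_rec = rng.normal(0, 0.5, (n_hidden, n_hidden)) # hidden -> hidden (recurrence)
W_out = rng.normal(0, 0.5, (n_out, n_hidden))    # hidden -> output

def step(x, h_prev):
    """One control tick: the new hidden state carries memory of past inputs."""
    h = np.tanh(W_in @ x + W_rec @ h_prev)
    y = np.tanh(W_out @ h)                       # bounded actuator commands
    return y, h

h = np.zeros(n_hidden)
for t in range(5):
    sensors = rng.uniform(0.0, 1.0, n_in)        # stand-in for range readings
    command, h = step(sensors, h)
    print(f"t={t} wheel commands={np.round(command, 3)}")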

Robotic Innovation of Human Space Exploration

Alex Monchak, Dr. Ki-Young Jeong

University of Houston

Abstract

To provide economic value, actions must match technology with demand. Customer wants for robotic human space exploration are derived from market research conducted, without random sampling, prior to this study in 2004, 2006 and 2006. The relative importance of the customer wants is ranked by a focus group, with individuals spending an imaginary one hundred dollars on the recorded wants. Actions required to meet twenty-six customer wants are collected. The result of the study to date is the set of customer wants matched with the planned functions. Customer requirements for multiplayer login, multiple robots and cell phones drive the technical requirements for an interaction capability for ten million users, multi-robot interaction among ten robots, and mobile controls with ten controls. A potentially innovative environment could result in a method patent. The need to extend life beyond Earth creates an opportunity pull for precursor robotic missions. Such missions could benefit from integration innovation of mobile phones, enterprise search and commercial launch.

Keywords: robotic, innovation, human space exploration

Acknowledgements:

Thank you to the engineering management faculty, staff and students.

The new Theory of Objects

and the Automatic Generation of Intelligent Agents

Sergio Pissanetzky, Research Scientist. Member, AAAI, IEEE.

Sergio@

ABSTRACT

The new mathematical Theory of Objects [1] is based on the Matrix Model of Computation, which, in its imperative form (iMMC), can perfectly represent any finite physical system. Transformations exist that convert any iMMC representation into a canonical form (cMMC) [1], where the system is represented by a single sparse canonical matrix of services C. A cMMC representation can also be directly constructed as a knowledge base, either from percepts or from natural instructions provided by a teacher [1, p.161]. A learning cMMC develops intelligence by automatically forming objects from what it already knows and using them to improve its ability to learn, a process known as scaffolding. The theory applies to any system, and establishes that the profile of C corresponds to the internal energy of the system and the stable attractors to the objects in the natural ontology of the system. The dynamics of the system is random and dissipative, and is provided by the Scope Constriction Algorithm (SCA). SCA minimizes the profile by refactoring the matrix and dissipating the energy, while preserving the canonicity of C and the behavior of the system. Under this dynamics, similar services coalesce into highly cohesive but well differentiated modules, the attractors, which represent the objects existing in the system, while the refactored matrix becomes the complete digital equivalent circuit of the system. The objects themselves are found in the attractors by the Object Recognition Algorithm (ORA). Objects are characterized by their stability and configuration independence. If an interaction between objects perturbs them into a disordered state, they will soon recover, perhaps in a different configuration. ORA uses precisely these features to recognize them.

This presentation emphasizes the ability of the cMMC to learn, reason, and develop intelligence. The services in the cMMC are logical conjunctions of variables [1, p.159]. If the variables represent sentences in propositional logic [1, p.162], and with the addition of an inference algorithm that converts the sentences into conjunctive normal form, resolution can be applied for reasoning. Resolution is always sound and complete. Since the cMMC automatically forms objects that can learn, reason, and systematically improve their ability to learn and reason, the cMMC is a thinking machine and the objects are intelligent agents. The type of agent depends on what has been learned. Domain-specific learning results in logic-based and circuit-based agents; method-specific learning results in bootstrap learning components specialized for that method of learning. An example is presented where several logic-based intelligent agents are automatically generated.
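For readers unfamiliar with the profile quantity being minimized, the toy Python sketch below computes the standard sparse-matrix profile (the distance from the diagonal to the first nonzero in each row) and finds, by brute force on a 4x4 example, the symmetric row/column ordering that minimizes it. This only illustrates the quantity that ordering-based refactoring reduces; it is not the Scope Constriction Algorithm of [1], and the example matrix is invented.

# Toy illustration of the matrix "profile" (envelope) that SCA-style refactoring
# seeks to reduce by reordering; this is NOT the Scope Constriction Algorithm.
import numpy as np
from itertools import permutations

def profile(A):
    """Sum over rows of the distance from the diagonal to the first nonzero."""
    total = 0
    for i in range(A.shape[0]):
        nz = np.nonzero(A[i, : i + 1])[0]
        if nz.size:
            total += i - nz[0]
    return total

# A small symmetric incidence-like matrix of "services" (1 = coupling).
A = np.array([[1, 0, 0, 1],
              [0, 1, 1, 0],
              [0, 1, 1, 0],
              [1, 0, 0, 1]])

best = min(permutations(range(4)), key=lambda p: profile(A[np.ix_(p, p)]))
print("original profile:", profile(A))                       # 4
print("best ordering:", best, "profile:", profile(A[np.ix_(best, best)]))  # 2

The reordering that groups coupled services next to each other (here rows 0 and 3, and rows 1 and 2) halves the profile, which is the kind of module-forming coalescence described above, shown at toy scale.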

[1] “A New Universal Model of Computation and its Contribution to Learning, Intelligence, Parallelism, Ontologies, Refactoring, and the Sharing of Resources.” Sergio Pissanetzky. Int. J. of Computational Intelligence, Vol. 5, No. 2, pp. 143-173 (August 2009).

Keywords: AGI, intelligent agents, bootstrap learning components, theory of objects, reasoning.

Temporal Logic in Robotics – The Temporal Engineering of Software

Gordon Morrison

Gordon.Morrison@

816-835-3071

Abstract

Robotics and control systems lend themselves to temporal engineering. Starting with an extended BNF provides an unambiguous definition for a complete control specification. The logic of BNF by definition flows in step with time, and that leads to a temporal definition for truly engineered software. As in most demand- and event-driven applications, logic is the key to creating a reliable robotic system. The temporal logic presented in this paper provides a clear way of creating step-by-step rules to control and respond to events. Temporal logic reduces complexity and provides a clear relationship to the specification. Tracing logic has always been a bane in engineering; with temporal logic, trace is inherent in the architecture for every step. This paper shows the simulation of a multi-jointed robotic arm using temporal logic.

LUNCHEON KEYNOTE

| |Luncheon (Lone Star Room) |

|11:40 AM |Welcome: Dr. Paul Frenger, A Working Hypothesis Inc |

| |Introduction of Speaker: Dr Liwen Shih, UHCL |

|12:00 – 12:55 PM |Keynote Presentation: "Global Brain Movement: An Asia Perspective" |

| |Presenter: Dr. Da Hsuan Feng, Senior Executive Vice President, National Cheng Kung University, |

| |Tainan, Taiwan |

Global Brain Movement: An Asia Perspective

Dr. Da Hsuan Feng

Senior Executive Vice President

National Cheng Kung University

Tainan, Taiwan

Abstract

Asia Pacific in the 21st century is unquestionably one of the most exciting regions of the world. Hundreds of thousands of Asian students in the 20th century received scholarships to pursue advanced degrees in US universities. This trend is continuing even in the 21st century. While many remained in North America after they completed their studies, many did return to their native countries in Asia Pacific. In fact, the economic and intellectual miracle growth of Taiwan and South Korea in large part is due to the returning students’ contributions to their native lands in the latter part of the 20th century.

In recent years, several prominent and high-caliber intellectuals have “returned” from North America to the Asia Pacific region to become university administrators. This movement is globally and palpably showing the robustness of Asia Pacific higher education in the 21st century. It is also a serious “loss” to higher education in the United States. These individuals possess profound intellectual and administrative capabilities and a deep understanding of Asia. Had they held equally important positions in North America, they could surely have contributed profoundly to a higher education landscape where there is a strong call for deep and sustainable connection to the Asia Pacific in this century.

Is it conceivable that the newly founded intellectual powers in China, Taiwan, Hong Kong, and many others across the Asia Pacific, collaborating with traditional US intellectual powerhouses, could transform less fortunate nations and regions in the 21st century through a “Super Marshall Plan”? Dr. Feng will share his views on this and other issues from a unique perspective.

Speaker Bio

Da Hsuan Feng was born in New Delhi, India to a musician mother and journalist father. In the early 1950s, he moved to Singapore. After two years of civil engineering at Singapore Polytechnic, his interest shifted dramatically toward basic science. Dr. Feng received his physics BA and Ph.D. from Drew University (1968) and the University of Minnesota (1972), respectively. During his tenure at Drexel University, he served two years as NSF Program Director of Theoretical Physics and was a visiting professor at the Niels Bohr Institute and Daresbury Laboratory. Prior to joining NCKU as the Senior Executive Vice President in 2007, Feng had been the Vice President for Research and Professor of Physics at the University of Texas at Dallas since 2000.

INNOVATION 2009 PROGRAM


| |Session 1: Computation (Lone Star Room) |

| |Chair: Dr Liwen Shih, UHCL |

|1:00 – 1:25 PM |Topic: “Real-Time Widely Distributed Simulations” |

| |Presenter: Robert Phillips , L3 |

|1:25 – 1:50 PM |Topic: "Exploiting Parallelism in Space Radiation Analysis and Other Scientific Computations" |

| |Presenter: Frank Christiny and Liwen Shih, Ph.D. |

|1:50 – 2:15 PM |Topic: “Exploiting Large On-Chip Memory Space Through Data Recomputation” |

| |Presenter: Hakduran Koc, Ph.D., UHCL |

|2:15 – 2:25 PM |Break |

| |Session 2: Biomedical Studies (Lone Star Room) |

| |Chair: Dr. Paul Frenger, A Working Hypothesis Inc |

|2:25 – 2:50 PM |Topic: "New Devices for Non-invasive Diagnosis and Staging of Prostate Cancer by Zinc Sensing" |

| |Presenter: Christopher J. Frederickson, Ph.D., Andro Diagnostics, Inc |

|2:50 – 3:15 PM |Topic: “An approach in MRI-guided robot-assisted localized molecular imaging” |

| |Presenters: Ahmet E. Sonmez and Nikolaos V. Tsekos, UH |

| |Session 3: General (Lone Star Room) |

| |Chair: Hakduran Koc, Ph.D., UHCL |

|3:15 – 3:40 PM |Topic: “Solar Event Data Survey for Space Weather Prediction” |

| |Presenters: Twinkle Agarwal, Lance Hoang, Dan Fry and Liwen Shih, Ph.D., UHCL |

|3:40 – 4:05 PM |Topic: "Give NASA a Chance to be the Next Google" |

| |Presenters: Chris Bronk, Ph.D., Tony Elam, and Tory Gattis |

|4:05 – 4:30 PM |Topic: "Games are Not Just for Fun" |

| |Presenter: Joseph Giarratano, Ph.D., UHCL |

Real-Time Widely Distributed Simulations

Robert Phillips/L-3

robert.g.phillips@

Abstract

This presentation will describe a number of issues and developed solutions for executing a widely distributed and loosely coupled simulation in real-time. Some of the key concepts that will be discussed include:

• Four possible strategies to compensate for data latency, with their benefits and costs.

• The use of data ownership transfer to handle critical sections of the simulation.

• Data extrapolation to compensate for slower data rates (a small sketch follows this list).
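As a minimal illustration of the extrapolation idea in the last bullet (not the presenter's actual compensation scheme), the Python sketch below shows first-order dead reckoning: a receiver running at 10 Hz estimates position between 1 Hz state updates from the last received position and velocity. All rates, values, and names are invented.

# Toy dead-reckoning sketch: a receiver extrapolates slowly-updated state
# between arrivals. Rates and state layout are invented for illustration.
from dataclasses import dataclass

@dataclass
class StateSample:
    t: float          # simulation time the sample describes (seconds)
    position: float   # metres along one axis
    velocity: float   # metres per second

def extrapolate(last: StateSample, now: float) -> float:
    """Estimate position at 'now' from the last received sample (first order)."""
    return last.position + last.velocity * (now - last.t)

# Sender publishes at 1 Hz; the receiver runs its frame loop at 10 Hz.
last = StateSample(t=0.0, position=100.0, velocity=2.0)
for frame in range(1, 11):
    now = frame * 0.1
    estimate = extrapolate(last, now)
    print(f"t={now:.1f}s estimated position={estimate:.2f} m")
# When the next real sample arrives, the receiver replaces 'last' and may blend
# toward it over a few frames to hide the correction.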

The presentation will also describe how data tends to fall into two categories: data that is very hard to predict but has a noticeable real-world latency (such as telemetry and commanding), and data that is easier to predict but has little or no real-world latency (such as “truth” data), and how these types of data present different challenges.

The presentation will also describe a methodology for determining if a given distributed simulation can be executed in real-time based upon network performance, data characteristics, and fidelity requirements.

Finally, the presentation will describe some current successful applications of these techniques and lessons learned, with a focus on distributed simulations used for joint training of Japan Aerospace Exploration Agency (JAXA) and NASA flight controllers for H-II Transfer Vehicle (HTV) / ISS proximity operations and capture.

Exploiting Parallelism in Space Radiation Analysis and Other Scientific Computations

Frank Christiny1,2 and Liwen Shih1, Ph.D.

University of Houston - Clear Lake1

The Boeing Company2

shih@uhcl.edu

ABSTRACT

Due to the inductive nature of current numerical methods, scientific software programs are characterized by their heavy use of looping structures and recursive calls. To increase the performance of these programs, this behavior can be exploited by software designed to work simultaneously on many chunks of repetitive code or data, as in parallel computing. Applying simple, known parallelizing techniques to sequential programs can yield significant improvements in speedup and throughput when they are executed on increasingly available parallel clusters of computers. Parallelization of current sequential programs can be done by utilizing both known, general optimizing techniques and specific ones facilitated by the software application under parallelization. Among the general techniques are:

1) Inlining small procedures and function calls that are heavily utilized, such as exponential and other transcendental functions, so they can be distributed among the participating parallel processors.

2) Privatizing variables that can be made local in the distributed system, saving time by eliminating parameter transfer and context switching.

3) Unrolling short loops, which has long been used as a sequential optimizing technique; it can also be applied to parallel optimization with the same benefits, only multiplied by the number of processors.

4) Finding non-dependent inner loops to distribute the load, where proving independence while maintaining correctness can be very hard.

Among the specific techniques used are:

1) Parallelizing the most heavily called subprograms in order to impart the most gain. Naturally, this technique requires collecting performance metrics.

2) Reducing all input/output to the absolute minimum by designating one or more nodes, depending on the number of processes, as the sole handlers of the data that the program uses and delivers.

All these techniques have been applied to the analysis of HZETRN, a space radiation program written in Fortran77, which has been used for many years by NASA scientists to simulate and study the potential radiation effects on human tissue. By their nature, these parallelizing techniques can be extended to any scientific program written in any modern sequential computer language.
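As a small stand-in for the fourth general technique above (distributing non-dependent loop iterations), the Python sketch below farms independent iterations of an outer loop out to worker processes. HZETRN itself is written in Fortran 77 and its actual decomposition is not shown here; the function, depth values, and worker count are invented for illustration.

# Illustrative only: distribute independent iterations of an outer loop across
# worker processes, the way a non-dependent Fortran loop might be parallelized.
import math
from multiprocessing import Pool

def dose_at_depth(depth_cm: float) -> float:
    """Stand-in for an expensive transport calculation at one shielding depth."""
    return sum(math.exp(-depth_cm * k / 1000.0) for k in range(200_000))

def main():
    depths = [d * 0.5 for d in range(32)]        # 32 independent depth points
    with Pool(processes=4) as pool:              # 4 workers; one chunk per worker
        results = pool.map(dose_at_depth, depths, chunksize=8)
    for depth, dose in zip(depths, results):
        print(f"depth={depth:4.1f} cm  dose~{dose:10.1f} (arbitrary units)")

if __name__ == "__main__":                       # guard required by multiprocessing
    main()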

Acknowledgement:

The authors appreciate the expert support of NASA Langley, TACC, TLC2, TeraGrid and ISSO grants.

Exploiting Large On-Chip Memory Space Through Data Recomputation

Hakduran Koc, Ph.D.

Assistant Professor of Computer Engineering

University of Houston - Clear Lake

Ph.: +1 (281) 283-3877

Fax: +1 (281) 283-3870

Url:

Abstract

This talk presents a novel on-chip memory space utilization strategy for architectures that accommodate large on-chip software-managed memories. In such architectures, the access latencies of data blocks are typically proportional to the distance between the processor and the requested data. Considering such an on-chip memory hierarchy, we propose to recompute the value of an on-chip data element that is far from the processor using closer data elements, instead of directly accessing the far data, whenever it is beneficial to do so in terms of performance.

The talk presents the details of a compiler algorithm that implements the proposed approach and reports experimental data collected using six data-intensive application programs. Our experimental evaluation indicates an 8.2% performance improvement, on average, over a state-of-the-art on-chip memory management strategy and shows consistent improvements for varying on-chip memory sizes and different data access latencies.
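The compiler algorithm itself is not reproduced in this abstract; the toy Python cost model below only illustrates the underlying trade-off the approach exploits, choosing to recompute a far value from nearby operands when that is cheaper than one far on-chip access. The cycle counts are invented for illustration.

# Toy latency model: recompute a far value from near operands when that is
# cheaper than fetching it. Cycle counts are invented for illustration.
NEAR_ACCESS_CYCLES = 2      # operand already in a close memory bank
FAR_ACCESS_CYCLES = 20      # value stored in a distant on-chip bank
ADD_CYCLES = 1              # cost of re-executing one arithmetic operation

def best_strategy(n_near_operands: int, n_ops: int) -> str:
    """Choose fetch vs. recompute for one far value."""
    fetch_cost = FAR_ACCESS_CYCLES
    recompute_cost = n_near_operands * NEAR_ACCESS_CYCLES + n_ops * ADD_CYCLES
    return "recompute" if recompute_cost < fetch_cost else "fetch"

# e.g. c = a + b where a and b are near but c has been pushed to a far bank:
print(best_strategy(n_near_operands=2, n_ops=1))   # recompute (5 < 20 cycles)
# a value needing many near operands and operations is cheaper to fetch:
print(best_strategy(n_near_operands=8, n_ops=6))   # fetch (22 >= 20 cycles)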

New Devices for Non-invasive Diagnosis and Staging

of Prostate Cancer by Zinc Sensing

Christopher J. Frederickson, Ph.D.

CSO/CTO, Andro Diagnostics, Inc.

101 14th Street, Galveston TX 77550

409-354-8998

cjfrederickson@

ABSTRACT

Every year an estimated 250,000 men die of prostate cancer, with about 28,000 dying in the USA, where prostate cancer is the number two cancer killer of men. The principal reason for this tragedy is simply that there is no accurate test for early prostate cancer; the tests available (PSA plus the digital rectal exam) are so inaccurate that three-quarters of American men do not even bother to get tested, and those who do are misdiagnosed (sent for unnecessary biopsies) 80% of the time.

We have turned to very old technology to create a new prostate cancer diagnostic. In 1952 it was first reported that the prodigious zinc secretion of the prostate (~9-10 mM secreted into prostatic fluid) is reduced by up to 90% in early-stage prostate cancer. Though re-discovered and replicated more than two dozen times since then, this down-regulation of a metabolomic biomarker (secreted zinc) in prostate cancer was never developed into a practical cancer screening test.

Our group is in pre-FDA clinical trials evaluating the prostatic fluid zinc test as a cancer diagnostic. In the first analysis of 88 men scheduled for prostate biopsy, our test yielded 90% sensitivity and 94% specificity for predicting cancer versus no cancer in the biopsy result (AUROC = 95%). We are presently testing another 300 men in preparation for beginning our FDA PMA trial.

Our first products will be tests conducted in a reference laboratory (Product 1) or in the urology or family-practice office (Product 2). The zinc is measured fluorimetrically in the prostatic fluid after simple dilution of the massage-expressed prostatic fluid. The third product will be a colorimetric zinc “assay” that can be done at home (using ejaculate) in a few minutes, with a kit that would wholesale for ~ $20. The 5 year survival for men whose prostate cancer is detected and treated early (while organ confined) is 100%. Widespread use of our ejaculate home test can potentially prevent essentially all of the deaths from prostate cancer among those men who have the resources to obtain treatment after screening positive.

An approach in MRI-guided robot-assisted localized molecular imaging

Ahmet E. Sonmez and Nikolaos V. Tsekos

Medical Robotics Laboratory, Department of Computer Science

Phone: 713-743-3350 e-mail: ntsekos@cs.uh.edu

Abstract

Recent advances in understanding the molecular features of lesions in vivo and the rapid evolution of molecular imaging, such as optical coherence tomography, may offer new paradigms, such as the possibility of assessing the malignancy of a tumor in situ (e.g. [1]). However, a challenge encountered with such modalities is the limited field of view. A possible solution is a trans-needle approach for placing the molecular imaging probe inside a lesion using (1) a traditional modality, such as MRI, for real-time guidance and (2) a robotic system for maneuvering the needle-probe [2-4]. We present a prototype system for MRI-guided localized molecular imaging. In particular, this portion of the work is focused on the implementation of image-based closed-loop control of the needle-probe for scanning the tissue.

Figure 1 illustrates the needle-probe, constructed of ABS with a 3D printer and actuated with piezoelectric Squiggle® motors so as to be MRI compatible. The probe has two linear degrees of freedom: one for gross and one for fine scanning. MR images are used to define the limits of the scanning areas relative to the MR scanner coordinate system. The position of the scanning element is identified by home-made “light-only” optical encoders (Fig. 2). These quadrature encoders use an encoder strip that interrupts the light emitted by an LED and detected by two photodetectors. The LED and detectors are located over 6 meters away from the needle-probe, connected by optical fibers. A TTL square wave is generated corresponding to the light interruptions, with two lagging square waves corresponding to the two phototransistors: by measuring the number and relative lag of the pulses we calculate direction and distance. This sensor enables closed-loop control for MR-compatible manipulators without the limitations due to the low resolution of the MR images (which are used only for planning the scanning areas).
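The fiber-optic hardware and decoding electronics are not detailed in the abstract; as a generic illustration of standard quadrature decoding (two out-of-phase square waves yielding both step count and direction, as described above), the Python sketch below accumulates signed steps from a sampled (A, B) sequence. The sample data are invented.

# Standard quadrature decoding sketch: two square waves (A, B) 90 degrees out
# of phase; the transition sequence gives direction, the count gives distance.
# State-transition table: (previous AB, current AB) -> +1 / -1 step.
STEP = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def count_steps(samples):
    """samples: iterable of (A, B) logic levels sampled over time."""
    position = 0
    prev = None
    for a, b in samples:
        state = (a << 1) | b
        if prev is not None and state != prev:
            position += STEP.get((prev, state), 0)  # ignore illegal double-steps
        prev = state
    return position

forward = [(0, 0), (0, 1), (1, 1), (1, 0)] * 2
print(count_steps(forward))                    # +7 steps in one direction
print(count_steps(list(reversed(forward))))    # -7 steps in the other direction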

Keywords:

Intraoperative MRI, biopsy needle, encoder, medical robotic device

References:

[1] D. J. Yang, E. E. Kim, and T. Inoue, "Targeted molecular imaging in oncology," Ann Nucl Med, vol. 20, pp. 1-11, Jan 2006.

[2] N. V. Tsekos, A. Khanicheh, E. Christoforou, and C. Mavroidis, "Magnetic resonance-compatible robotic and mechatronics systems for image-guided interventions and rehabilitation: a review study," Annu Rev Biomed Eng, vol. 9, pp. 351-87, 2007.

[3] E. Christoforou, E. Akbudak, A. Ozcan, M. Karanikolas, and N. V. Tsekos, "Performance of interventions with manipulator-driven real-time MR guidance: implementation and initial in vitro tests," Magn Reson Imaging, vol. 25, pp. 69-77, Jan 2007.

[4] N. V. Tsekos, A. Ozcan, and E. Christoforou, "A Prototype Manipulator for MR-guided Interventions Inside Standard Cylindrical MRI Scanners," J Biomech Eng, vol. 127, pp. 972-980, 2005.

Solar Event Data Survey for Space Weather Prediction

Twinkle Agarwal1, Lance Hoang1,2, Dan Fry1,2, Ph.D. and Liwen Shih1, Ph.D.

University of Houston - Clear Lake1

Lockheed Martin Corporation 2

shih@uhcl.edu

ABSTRACT

The ultimate goal of the project is to predict space weather in order to protect astronauts and equipment from hazardous space radiation. The earth’s magnetic field deflects the majority of space radiation away from earth, and the earth’s atmosphere also protects human beings from the sun’s radiation. Clearly, for humans to be able to live and work in outer space for an extended period of time without the earth’s magnetic field and atmosphere, better radiation protection and more reliable prediction systems are needed for the safety of the mission crew. In an effort to provide a safe working environment in space, NASA has deployed various satellites to capture data and analyze solar wind activity. Among the well-known satellites are the GOES (Geostationary Operational Environmental Satellite Program), SOHO (Solar & Heliospheric Observatory), ACE (Advanced Composition Explorer), and STEREO (Solar TErrestrial RElations Observatory) satellites. These satellite instruments provide important information about solar activity and its potential effect on humans working in space. Ideally, there should be an automated solar event monitoring system that can efficiently analyze data from each satellite to quickly and accurately predict solar events and enhance the safety of the space crew. However, space weather prediction is still in its infancy. Mainly, there may not be enough data to perform any significant statistics. In addition, there are technical challenges in correlating different data sources. Moreover, a limited amount of data may provide a false indication of a solar event. The validity and accuracy of the data collected are also in question. All of the concerns above make space weather prediction a great challenge. As a result, it is currently difficult to accurately predict solar event activity and its effect in space. However, with our initial data survey using a combination of various data sources, an intelligent, computerized data visualization and monitoring system may be developed to help speed up solar event response and eventually determine the prerequisites and the feasibility of space weather prediction in the near future.

Acknowledgement:

The authors appreciate the expert support of Yuan-Kuen Ko of Naval Research Lab and ISSO grants.

Give NASA a Chance to be the Next Google

Chris Bronk, Ph.D., Baker Institute for Public Policy at Rice University, Fellow ‐ Technology, Society & Public Policy, rcbronk@rice.edu

Tony Elam, Principal, Elam Consulting, Anthony.J.Elam@

Tory Gattis, OpenTeams LLC, Founder, President, and Social Systems Architect, tgattis@

Abstract

As it celebrates the 40th anniversary of the Apollo moon landings, NASA may be facing its greatest challenge in history. The Review of U.S. Human Space Flight Plans Committee (aka the Augustine Commission) has recognized that severe budget and safety constraints threaten NASA’s mission over the coming decades. A radical organizational breakthrough is needed. There are well‐documented problems with the existing bureaucracy, and heavy reliance on private contractor outsourcing has not been a panacea. To succeed, NASA will need an organization that can enable something like a “Moore’s Law of Space Travel” — yielding continuous reductions in the costs and risks of space travel similar to the rapid improvements we’ve seen in computer technology. At the same time, the Obama administration wants to pioneer “Government 2.0” based on modern “Web 2.0” collaboration technologies to improve both efficiency and effectiveness. It wants government to be more agile, innovative and entrepreneurial, and has hired federal information and technology officers to make this happen. What the administration needs is an agency to create a prototype of these new approaches — a “Google of Government” able to transplant the Silicon Valley entrepreneurial ecosystem inside its organization to yield a continuous stream of innovations. Who better than NASA to pioneer this approach?

In our session, we wish to address these questions and others:

• What are the bureaucratic impediments to innovation within NASA and the contractor community?

• Can the Capability Maturity Model (CMM) be made more lean/agile?

• How could the "open source" approach to large-scale development be applied at NASA? Is there a role for Linus' Law ("Given enough eyeballs, all bugs are shallow") to mitigate the extreme consequences of potential "bugs" in the Moon-Mars mission model? Could it also be an effective way to work with international, academic and private sector partners, as well as to build public engagement?

• Is it possible to develop a "Moore's Law of Space Travel", yielding continuous reductions in the costs and risks of space travel similar to the rapid improvements we've seen in computer technology, that would:

o Reduce costs, timelines, and risk

o Improve safety and capabilities

o Increase science and technology spin-offs

o Create real options, flexibility

• Could innovative new collaboration software, delivered as a web community, radically improve NASA's ability to innovate cost-effectively?

Games are Not Just for Fun

Joseph Giarratano, Ph.D.

UHCL

giarratano@uhcl.edu

ABSTRACT

For several years I have been teaching a course on the theory and programming of games. The course theme is applying a Fourth Generation game engine and language called DarkBasic to designing and developing 2D and 3D real-time multimedia simulations and games. Many studies have shown that students learn much better when using interactive methods such as games rather than sitting through a conventional three hour lecture. This interactive mode has sometimes been called game programming but it is much more.

Very interactive games with mechanical results have been used for years in flight training, and the military is now one of the major users, particularly with the new electronic battlefield where all units of air, ground and sea are modeled in an integrated environment. This allows much better command and control than ever before. Also, with new robotic and autonomous weapon and support systems being put into use, soldiers face much less risk and accomplish more.

This talk will discuss some of the major applications of games outside of the entertainment industry, including education, training, and robotics. Simulations and games are extensively used for the training of defense, police, firefighters, and industry and manufacturing workers as a way to save money and provide a richer training dataset than can be provided in the real world. There are many other applications today which use real-time programming and games in serious, multimedia, real-time applications. Another topic to be discussed will be the economics of running a major game establishment and making game arcades available for student recreation and research.

It is common for pilots, police and other people engaged in hazardous jobs to first engage in computer training exercises before live training. The military uses this extensively as an economical and safe way of conducting classic war games and terrorist scenarios. Industry and manufacturing offer computer training first before exposing students to potentially dangerous situations. The entertainment game market, with sales of over $20 billion a year, now surpasses the movie and music industries, and government defense expenditures greatly exceed that. Homeland Security uses game simulations to train for and evaluate terrorist threats.

Major universities such as Stanford, MIT, SMU and Wisconsin already teach simulation and game courses, and some offer 4-year degrees in this field. Students can significantly enhance their resumes by listing experience with 3D programming and Artificial Intelligence. Many jobs are available in the computer game industry that are standard types and do not involve game programming. These offer further opportunities for conventional computer careers that may lead into game creation.
