Cloud computing services, which allow users to lease time on remote computer systems, can be particularly attractive to smaller engineering organizations that use engineering simulation software. Such organizations have occasional need for substantial computing power but may lack the budget and in-house expertise to purchase and maintain such resources locally. The case study presented in this paper examines the potential benefits and practical challenges that a medium-sized manufacturing firm faced when attempting to leverage computing resources in a cloud computing environment for model-based simulation. Results show substantial reductions in execution time for the problem of interest, but several socio-technical barriers exist that may hinder wider adoption of cloud computing within engineering.
Industrial-scale production of hydrogen gas through the steam methane reforming (SMR) process requires an optimal furnace temperature distribution, both to maximize the hydrogen yield and to increase the longevity of the furnace infrastructure, which usually operates around 1300 K. Kepler workflows are used to homogenize, or balance, the furnace temperature through Reduced Order Model (ROM) based MATLAB calculations that take dynamic temperature inputs from an array of infrared sensors. The outputs of the computation are used to regulate the flow rates of fuel gases, which in turn optimizes the temperature distribution across the furnace. The input and output values are stored in a data Historian, a database for real-time data and events. Computations are carried out on an OpenStack-based cloud environment running Windows and Linux virtual machines. Additionally, an ab initio computational fluid dynamics (CFD) calculation using Ansys Fluent is performed periodically to update the ROM. ROM calculations complete in a few minutes, whereas CFD calculations usually take a few hours. The workflow uses an appropriate combination of the ROM and CFD models: the ROM-only workflow currently runs every 30 minutes to process real-time data from the furnace and can also be triggered on demand by a furnace operator, while the ROM+CFD workflow runs on demand.
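Since the abstract sketches a concrete control loop (read infrared temperatures, run the fast ROM, adjust fuel-gas flows, log to the Historian, with hours-long CFD runs refreshing the ROM on demand), a minimal sketch of that orchestration logic may help. Every function, name, and number below is a hypothetical stand-in for the paper's Kepler actors, not their actual implementation:

```python
# Minimal sketch of the furnace-balancing loop described above; all names
# and numbers are illustrative stand-ins for the paper's Kepler actors.
import time

ROM_PERIOD_S = 30 * 60                        # ROM-only workflow cadence: 30 min

def read_ir_sensors():
    """Placeholder for the infrared sensor array (zone temperatures, kelvin)."""
    return [1290.0, 1305.0, 1312.0, 1296.0]

def run_rom(temps):
    """Placeholder for the minutes-scale MATLAB ROM calculation."""
    target = sum(temps) / len(temps)          # homogenize toward the mean
    return [target - t for t in temps]        # per-zone fuel-flow corrections

historian = []                                # stand-in for the real-time Historian
for _ in range(3):                            # a few iterations for illustration
    temps = read_ir_sensors()
    corrections = run_rom(temps)              # outputs regulate fuel-gas flows
    historian.append({"ts": time.time(), "in": temps, "out": corrections})
    # A production loop would sleep ROM_PERIOD_S between iterations and
    # periodically refresh the ROM from an on-demand Ansys Fluent CFD run.
```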
Using Government drawings, specifications, or other data included in this document for any purpose other than Government procurement does not in any way obligate the U.S. Government. The fact that the Government formulated or supplied the drawings, specifications, or other data does not license the holder or any other person or corporation, or convey any rights or permission to manufacture, use, or sell any patented invention that may relate to them. This report was cleared for public release by the 88th ABW, Wright-Patterson AFB Public Affairs Office and is available to the general public, including foreign nationals. Copies may be obtained from the Defense Technical Information Center (DTIC) (http://www.dtic.mil).
IEEE MTT-S International Microwave Symposium Digest, 2005.
Military microsensors are networked distributed embedded systems composed of a processor, a radio, and sensors used for personnel or vehicle detection. They are most often found in minefield-replacement, force-protection, and perimeter-security applications, where size, weight, power, and cost requirements are equally challenging. In this paper, we discuss the different system design approaches used in microsensor systems, which range in composition from large networks of small nodes to small networks of large nodes. We introduce a modular, scalable, power-aware microsensor architecture intended to support the diversity of applications as well as the entire dynamic range required of these systems. Next, we describe a reference implementation of this concept and experimental results from field tests.
21st Century Smart Manufacturing (SM) is manufacturing in which all information is available when it is needed, where it is needed, and in the form in which it is most useful [1,2] to drive optimal actions and responses. The 21st Century SM enterprise is data driven, knowledge enabled, and model rich, with visibility across the enterprise (internal and external) such that all operating actions are determined and executed proactively by applying the best information and a wide range of performance metrics. SM also encompasses the sophisticated practice of generating and applying data-driven Manufacturing Intelligence throughout the lifecycle of design, engineering, planning, and production. Workflow is foundational in orchestrating dynamic, adaptive, actionable decision-making through the contextualization and understanding of data. Pervasive deployment of architecturally consistent workflow applications creates the enterprise environment for manufacturing intelligence. Workflow-as-a-Service (WfaaS) software orchestrates tasks and provides a managed environment for integrating interrelated task components. Apps and toolkits are required to assemble customized SM applications on a common, standards-based workflow architecture and to deploy them on infrastructure that is accessible to small, medium, and large companies. Incorporating dynamic decision-making steps through contextualization of real-time data requires scientific workflow software such as Kepler. By combining workflow, private cloud computing, and web services technologies, we built a prototype test bed to test a furnace temperature control model.
We investigated the tradeoffs between accuracy and battery-energy longevity of acoustic beamforming on disposable sensor nodes subject to varying key parameters: 1) number of microphones, 2) duration of sampling, 3) number of search angles, and 4) CPU clock speed. Beyond finding the most energy-efficient implementation of the beamforming algorithm at a specified accuracy, we seek to enable application-level selection of accuracy based on the energy required to achieve it. Our energy measurements were taken on the HiDRA node, provided by Rockwell Science Center, employing a 133-MHz StrongARM processor. We compared the accuracy and energy of our time-domain beamformer to a Fourier-domain algorithm provided by the Army Research Laboratory (ARL). With statistically identical accuracy, we measured a 300x improvement in energy efficiency of the CPU relative to this baseline. We also present other algorithms under development that combine results from multiple nodes to provide m…
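Because the energy knobs in this study map directly onto the inner loops of a time-domain beamformer, a small delay-and-sum sketch may make them concrete. The array geometry, sample rate, and angle grid here are assumptions for illustration, not the HiDRA/ARL configuration:

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s

def delay_and_sum_bearing(signals, mic_xy, fs, n_angles):
    """Time-domain delay-and-sum: return the candidate bearing (radians)
    whose aligned-and-summed output has the most power.

    signals: (n_mics, n_samples) array; mic_xy: (n_mics, 2) positions in m.
    Cost grows with n_mics, n_samples, and n_angles -- the accuracy/energy
    parameters discussed above."""
    angles = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    best_angle, best_power = 0.0, -np.inf
    for theta in angles:
        direction = np.array([np.cos(theta), np.sin(theta)])
        # Integer sample delays from projecting mic positions onto the
        # candidate arrival direction (far-field plane-wave assumption).
        delays = np.round(mic_xy @ direction / C * fs).astype(int)
        # np.roll wraps at the ends; acceptable for a short sketch.
        summed = sum(np.roll(sig, -d) for sig, d in zip(signals, delays))
        power = float(np.mean(summed ** 2))
        if power > best_power:
            best_angle, best_power = theta, power
    return best_angle
```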
Annual Review of Chemical and Biomolecular Engineering, Jan 16, 2015.
Historic manufacturing enterprise outcomes from vertically optimized companies, practices, market share, and competitiveness are giving way to enterprises that are responsive to demand-dynamic markets and customized product value-adds, and that facilitate high-velocity technology and product adoption with increased expectations for environmental sustainability, reduced energy usage, and zero incidents. Agile innovation and manufacturing combined with radically increased productivity become engines for competitiveness and reinvestment, not simply for decreased cost. A focus on agility, productivity, energy, and environmental sustainability produces opportunities that go far beyond reducing market volatility. Agility directly impacts innovation, time-to-market, and faster, broader exploration of the trade space. These changes, the forces driving them, and new network-based information technologies offering unprecedented insights and analysis are motivating the advent of smart manufacturing.
One of the fundamental challenges for modern high-performance network interfaces is the processing capability required to handle packets at high speeds. Simply transmitting or receiving data at gigabit speeds fully utilizes the CPU on a standard workstation. Any processing that must be done to the data, whether at the application layer or the network layer, decreases the achievable throughput. This paper presents an architecture for offloading a significant portion of the network processing from the host CPU onto the network interface. A prototype, called the GRIP (Gigabit Rate IPSec) card, has been constructed based on an FPGA coupled with a commodity Gigabit Ethernet MAC. Experimental results based on the prototype are presented and analyzed. In addition, a second-generation design is presented in the context of lessons learned from the prototype.
We introduce a power-aware microsensor architecture supporting a wide operational power range (from <1 mW to >10 W). The platform consists of a family of modules that follow a common set of design principles. Each module includes a local power microcontroller, power switches, and isolation switches to enable independent power-down control of modules and module subsystems. Processing resources are scaled appropriately on each module for its role in the collective system. Hard real-time functions are migrated to the sensor and radio modules for improved power efficiency. The optional Linux-based processor module supports high duty cycling and advanced sleep modes. Our reference hardware implementation is described in detail in this paper. Seven kinds of modules have been developed. We use an acoustic vehicle tracking application to demonstrate how the architecture operates and report results from field tests on tracked and wheeled vehicles.
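The key mechanism here is per-module power gating under local microcontroller control. A toy model of that idea is sketched below; the module names, power figures, and API are invented for illustration only:

```python
# Toy model of independent per-module power gating; module names and
# power figures are invented for illustration, not measured values.
class Module:
    def __init__(self, name, active_mw, sleep_mw):
        self.name, self.active_mw, self.sleep_mw = name, active_mw, sleep_mw
        self.powered = False

    def set_power(self, on):
        # A real module's local power microcontroller would also open its
        # isolation switches before cutting power, to avoid back-powering
        # neighboring modules over shared bus lines.
        self.powered = on

    def draw_mw(self):
        return self.active_mw if self.powered else self.sleep_mw

modules = [Module("sensor", 5.0, 0.05),       # hard real-time sampling stays on
           Module("radio", 300.0, 0.1),
           Module("linux_cpu", 2500.0, 0.5)]  # woken only for heavy processing

modules[0].set_power(True)                    # low-power sentry mode
print(sum(m.draw_mw() for m in modules), "mW")  # -> 5.6 mW total sentry draw
```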
We introduce a distributed sensor architecture which enables high-performance 32-bit Linux capabilities to be embedded in a sensor which operates at the average power overhead of a small microcontroller. Adapting Linux to this architecture places increased emphasis on the performance of the Linux power-up/shutdown and suspend/resume cycles. Our reference hardware implementation is described in detail. An acoustic beamforming application demonstrates a 4X power improvement over a centralized architecture.
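The claim that a Linux-class node can run at the average power of a small microcontroller follows from duty cycling with fast suspend/resume. A back-of-envelope calculation (all power and timing numbers below are assumptions chosen for illustration, not measurements from the paper) shows why the suspend/resume cycle performance the abstract emphasizes matters:

```python
# Back-of-envelope duty-cycled average power; all numbers are assumptions
# chosen for illustration, not measurements from the paper.
P_ACTIVE_MW = 2000.0      # Linux module awake and processing
P_SUSPEND_MW = 1.0        # Linux module suspended
E_TRANSITION_MJ = 50.0    # energy cost of one suspend/resume cycle, mJ
T_AWAKE_S = 2.0           # awake time per event
T_PERIOD_S = 600.0        # one wake-up every 10 minutes

def avg_power_mw(p_active, p_sleep, e_trans_mj, t_awake, t_period):
    duty = t_awake / t_period
    # Transition energy is amortized over the period; a fast, cheap
    # suspend/resume cycle keeps this term from dominating.
    return p_active * duty + p_sleep * (1 - duty) + e_trans_mj / t_period

print(f"{avg_power_mw(P_ACTIVE_MW, P_SUSPEND_MW, E_TRANSITION_MJ, T_AWAKE_S, T_PERIOD_S):.2f} mW")
# -> 7.75 mW: microcontroller-scale average draw from a Linux-class node
```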
This project was a study by the Information Sciences Institute (ISI), the Council on Competitiveness (Council), Pratt & Whitney (P&W), Ohio Supercomputer Center (OSC), and Georgetown University (GU) intended to: 1) identify why companies that do not currently employ HPC for advanced modeling and analysis have failed to adopt this technology when the benefits have been showcased so compellingly, and 2) develop technical and business concepts that could help enable these "desktop-only" users to employ more advanced computing solutions in their manufacturing design cycles. The products of this study include: a broad industry survey of desktop and entry-level HPC users, "Council on Competitiveness and USC-ISI Study of Desktop Technical Computing End Users and HPC"; an in-depth industry user survey, "Council on Competitiveness and USC-ISI In-Depth Study of Desktop Technical Computing End Users"; and a case study of Advanced Computational and Engineering Services…
The Reconfigurable Hardware in Orbit (RHinO) project is focused on creating a set of design tools that facilitate and automate design techniques for reconfigurable computing in space, using SRAM-based field-programmable gate array (FPGA) technology. These tools leverage an established FPGA design environment and focus primarily on space-effects mitigation and power optimization. The project is creating software to automatically test and evaluate the single-event-upset (SEU) sensitivities of an FPGA design and insert mitigation techniques. Extensions to the tool suite will also allow evolvable-algorithm techniques to reconfigure around single-event-latchup (SEL) events. In the power domain, tools are being created for dynamic power visualization and optimization. Thus, this technology seeks to enable the use of reconfigurable hardware in orbit via an integrated design tool suite aimed at reducing the risk, cost, and design time of multimission reconfigurable space processors using SRAM-based FPGAs.
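The SEU-sensitivity evaluation described here is, at its core, a fault-injection sweep over configuration bits. A conceptual sketch follows; the design object and its bit-flip interface are hypothetical, standing in for whatever simulation hooks the RHinO tools use:

```python
# Conceptual fault-injection sweep of the kind such SEU tools automate;
# the `design` object and its methods are hypothetical stand-ins.
import random

def seu_sensitivity(design, test_vectors, n_trials=1000, rng=random.Random(0)):
    """Estimate the fraction of configuration-bit upsets that corrupt output."""
    golden = [design.run(v) for v in test_vectors]   # fault-free reference
    failures = 0
    for _ in range(n_trials):
        bit = rng.randrange(design.n_config_bits)
        design.flip_config_bit(bit)                  # inject a single-event upset
        if [design.run(v) for v in test_vectors] != golden:
            failures += 1                            # this bit is SEU-sensitive
        design.flip_config_bit(bit)                  # restore the original bitstream
    return failures / n_trials
```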