U.S. Air Force Summer Faculty Fellowship Program

AFRL/RI Griffiss Business and Technology Park, New York

SF.10.11.B4921:Context Sensitive Information Visualization to Enhance Situational Awareness

Jason Moore

(315) 330-4192

Situational awareness is the "fabric" for collaboration and team synchronization in military operations. To be most enabling, its content and the presentation of that content must adapt to the information needs of individual team members, their tasks, and their current situation (context).

It must seamlessly bridge the strategic, operational, and tactical levels of military operations, supporting decisions and actions at all levels. We are looking for researchers to explore the science of adaptive, context-sensitive visualization of complex, data-rich environments, to support team self-synchronization and situation awareness, and to develop the underlying science needed to engineer future military systems.

Research areas of interest within this topic include:

·        Visualization of complex information systems

·        Techniques for de-cluttering data and for visualizing the de-cluttered data

·        Appropriate visualization abstractions that work over WebGL/JavaScript or other browser-enabled capabilities and languages

·        Composable visualization system interfaces that reduce the amount of user-end programming while still offering rich expressivity

·        Course of Action (COA) determination visualization systems that present the facets of information and the way the system derived the set and ranking of COAs

SF.20.01.B4122:Advanced Information Visualization and Human Computer Interaction

Jedrysik, P.

(315) 330-2150

In order to provide airmen with an information environment that is dynamic and tailorable based on information needs, we are developing advanced visualization techniques and interactive displays. Some of our technical challenges include fast access to voluminous dynamic data; high-fidelity representations; effective visual interfaces for analyzing large data sets; evaluation metrics for visualization success; effective interaction techniques; integrating large high-resolution displays into a seamless computing environment; and perceptually valid ways of presenting information on a large display. Researchers will investigate effective use of visualization hardware and software. Specific domains include: man-machine models; large-screen and handheld multi-touch displays; multi-modal interaction; continuous speech; natural language dialogue; eye tracking and gesture interpretation; intelligent interfaces and adaptive mediators; untethered pointing and interaction devices; 3D graphics and visualization; synthetic environments and virtual world C2 applications; display tiling and high-resolution media; collaborative interaction and decision making; integrated C2 situational awareness across air, space, and cyber domains; and mission planning and rehearsal.

SF.20.01.B4195:Challenges in Massive Point Cloud Visualization and Analysis

Aaron McVay

(315) 330-4780

Historical LiDAR collection efforts have already generated massive point cloud datasets, and new efforts are collecting as much as one terabyte of data per hour of flight.  Currently available applications and toolkits for processing point cloud data have several limitations that prevent scalability, timely dissemination, and analysis.  The focus of this research is on techniques that could be used on raw data during collection (or very soon after), enabling clients to gain access to data in near real time for analysis or to improve situation awareness.  Research efforts include the design of server-based techniques to minimize client-system processing and storage requirements, and the development of visualization capabilities usable by client applications.  Research areas of interest include:

·        Visualization of point cloud data using level-of-detail or de-cluttering algorithms

·        Techniques for fusion of supplementary geospatial data sources with point cloud data

·        Techniques for classifying 3D geometric patterns within point cloud data

·        Methods for point cloud change detection

·        Automation techniques to assist in 3D model generation from point cloud data
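
As a hedged illustration of the first bullet, the sketch below implements voxel-grid downsampling, one of the simplest level-of-detail reductions for a point cloud; the coordinates are made-up sample data, and a server-based implementation would operate on streaming tiles rather than an in-memory list.

```python
# Sketch (illustrative only): voxel-grid downsampling as a simple
# level-of-detail scheme. Points falling in the same voxel are replaced
# by their centroid; coarser voxel sizes give coarser LOD tiers.
from collections import defaultdict

def downsample(points, voxel_size):
    """Return one centroid per occupied voxel of edge length voxel_size."""
    buckets = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        buckets[key].append((x, y, z))
    centroids = []
    for pts in buckets.values():
        n = len(pts)
        centroids.append((sum(p[0] for p in pts) / n,
                          sum(p[1] for p in pts) / n,
                          sum(p[2] for p in pts) / n))
    return centroids

cloud = [(0.1, 0.1, 0.1), (0.2, 0.3, 0.1), (5.0, 5.0, 5.0)]
coarse = downsample(cloud, 1.0)   # two voxels occupied -> two points
```

A server could precompute such tiers at several voxel sizes and stream only the tier a client's viewpoint requires.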

SF.20.01.B4001:Mission Driven Enterprise to Tactical Information Sharing

James Milligan

(315) 330-4763

Forward deployed sensors, communication, and processing resources increase footprint, segregate data, decrease agility, slow the speed of command, and hamper synchronized operations.  Required is the capability to dynamically discover information assets and utilize them to disseminate information across globally distributed federations of consumers spread across both forward-deployed tactical data links and backbone enterprise networks. The challenges of securely discovering, connecting to, and coordinating interactions between federation members and transient information assets resident on intermittent, low bandwidth networks need to be addressed. Mission prioritized information sharing over large-scale, distributed, heterogeneous networks for shared situational awareness is non-trivial.  The problem space requires investigation, potential solutions and technologies need to be identified, and technical approaches need to be articulated which will lead to capabilities that enable forward deployed personnel to reach back to enterprise information assets, and allow rear deployed operators the reciprocal opportunity to reach forward to tactical assets that can address their information needs. 

SF.20.01.B4980:Decentralized Planning for Command and Control

Kurt Lachevet

(315) 330-2896

In an effort to support the Air Force’s mission to develop robust autonomous Command and Control (C2) systems in contested environments, we are interested in furthering the identification of problems, and the development of solutions, in decentralized planning for C2.  We are particularly interested in planning solutions for resource-constrained environments (processing power, data, and communication restrictions) with time-sensitive goals.  The following topic areas are of interest as we seek to provide a decentralized, collaborative C2 planning capability that enables continued execution of plans in a contested environment:

·         Mixed-Initiative Plan Adaptation – When communications with a centralized planning authority are compromised, continuity of operations at decentralized locations becomes difficult.  Capabilities of interest include effective methods of plan adaptation that do not require a complete re-planning phase in a centralized planning environment, and mixed-initiative plan adaptation solutions in a resource-constrained environment.


·         Plan Deconfliction – As plans are successfully adapted to ensure mission continuity, how is re-synchronization effected when communications between distributed/decentralized C2 components are lost or compromised and then restored, if at all?  Local planning by distributed agents may be locally effective, but it often leads to the need for later plan deconfliction and negotiation once communications resume and partial plans and plan fragments are aggregated. Finding effective ways to reduce the occurrence of initial plan conflicts, as well as to minimize the time required to deconflict a set of partial plans, is critical to time-sensitive mission requirements.  We are interested in plan deconfliction and synchronization solutions enabling inter-plan collaboration, efficient deconfliction, and plan (re)synchronization for autonomous/decentralized C2.
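
As an illustrative sketch of the deconfliction problem, the hypothetical fragment below treats partial plans as resource-time reservations and flags pairs that clash once plans are aggregated; the plan identifiers and resource names are invented for the example, and a real deconfliction engine would of course negotiate repairs rather than merely report conflicts.

```python
# Sketch (illustrative only): detect conflicts between partial plans
# produced by decentralized planners by checking for overlapping
# reservations of the same resource after plans are aggregated.

def conflicts(plans):
    """plans: list of (plan_id, resource, start, end). Returns pairs of
    plan_ids that reserve the same resource over overlapping intervals."""
    clashes = []
    for i in range(len(plans)):
        for j in range(i + 1, len(plans)):
            p, q = plans[i], plans[j]
            same_resource = p[1] == q[1]
            overlap = p[2] < q[3] and q[2] < p[3]   # open-interval overlap test
            if same_resource and overlap:
                clashes.append((p[0], q[0]))
    return clashes

merged = [("A", "tanker-1", 0, 60), ("B", "tanker-1", 45, 90), ("C", "tanker-2", 0, 30)]
# plans A and B both reserve tanker-1 over minutes 45-60
```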

SF.20.01.B4005:Wireless Optical Communications

Hughes, D.

(315) 330-4122

Malowicki, J.

(315) 330-3634

Quantum communications research involves theoretical and experimental work from diverse fields such as physics, electrical and computer engineering, computer science, and pure and applied mathematics. Objectives include investigations into integrating quantum data encryption with a QKD protocol, such as BB84, and characterizing its performance over a roughly 30 km free-space stationary link.

Free Space Optical Communication Links: Laser beams propagating through the atmosphere are affected by turbulence. The resulting wavefront distortions lead to performance degradation in the form of reduced signal power and increased bit error rates (BER), even in short links. Objectives include the development of the relationship between expected system performance and the specific factors responsible for wavefront distortions, which are typically linked to weather variables such as air temperature, pressure, and wind speed.
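
One standard first-order measure of that turbulence relationship is the plane-wave Rytov variance, sketched below; the refractive-index structure constant Cn² used here is an illustrative assumption, not a measured value for any particular link.

```python
# Sketch (illustrative only): plane-wave Rytov variance, a standard
# first-order measure of turbulence-induced scintillation strength.
# sigma^2 << 1 indicates weak fluctuations; sigma^2 >~ 1 indicates the
# strong-fluctuation regime where BER degradation is severe.
import math

def rytov_variance(cn2, wavelength_m, path_m):
    k = 2.0 * math.pi / wavelength_m              # optical wavenumber
    return 1.23 * cn2 * k ** (7.0 / 6.0) * path_m ** (11.0 / 6.0)

# Assumed values: 1550 nm laser, Cn2 = 1e-14 m^(-2/3), 30 km path
sigma2 = rytov_variance(cn2=1e-14, wavelength_m=1.55e-6, path_m=30e3)
```

Under these assumed conditions a 30 km near-ground link sits well into the strong-fluctuation regime, which is one reason the link-characterization objective above is nontrivial.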

Keywords applicable to these studies are: quantum cryptography, free-space laser propagation, coherent-state quantum data encryption, laser beam propagation through turbulent media, and integration of a quantum communications system with a pointing, acquisition, and control system.

SF.20.01.B4006:Airborne Networking and Communications Links

Matyjas, J.

(315) 330-4255

Medley, M.

(315) 330-4830

This research effort focuses on the examination of enabling techniques supporting potential and future highly mobile Airborne Networking and Communications Link capabilities and high-data-rate requirements as well as the exploration of research challenges therein. Special consideration will be given to topics that address the potential impact of cross-layer design and optimization among the physical, data link, and networking layers, to support heterogeneous information flows and differentiated quality of service over wireless networks including, but not limited to:

·        Physical and MAC layer design considerations for efficient networking of airborne, terrestrial, and space platforms;

·        Methods by which nodes will communicate across dynamic heterogeneous sub-networks with rapidly changing topologies and signaling environments, e.g., friendly/hostile links/nodes entering/leaving the grid;

·        Techniques to optimize the use of limited physical resources under rigorous Quality of Service (QoS) and data prioritization constraints;

·        Mechanisms to handle the security and information assurance problems associated with using new high-bandwidth, high-quality, communications links; and

·        Antenna designs and advanced coding for improved performance on airborne platforms.

SF.20.01.B4007:Cognitive RF Spectrum Mutability

Matyjas, J.

(315) 330-4255

Gudaitis, M.

(315) 330-4478

When considering operations across terrestrial, aerial, and space domains, effective use of the limited Electromagnetic Spectrum (EMS) for a multitude of purposes is critical.  The combined pressures of increasing demand for services and less available bandwidth make it imperative to develop capabilities for more integrated, flexible, and efficient use of available spectrum for all functions (communications, radar, sensors, electronic warfare, etc.) across all domains (terrestrial, aerial, and space).  In recognition of the need for affordable, multi-functional software-defined radios with spectrum agility and survivability in contested environments, this research effort seeks lightweight Next-Generation Software Defined Radio (SDR++) architectures and advanced waveform components for affordable solutions based on COTS and non-developmental items (NDI), relevant operational security, and appropriate trades in levels of software and hardware roots-of-trust. The aim is an innovative, high-performance, flexible radio platform for exploring next-generation cognitive, smart-radio concepts for advanced connectivity needs across heterogeneous waveform standards and multiple EMS use cases, while meeting tighter cost budgets and shorter time-to-fielding. The technology developments will support global connectivity and interoperability via multi-frequency/band/waveform reprogrammable radios for networked, multi-node aerial-layer connectivity and spectrum mutability, providing system composability and engineered resilience.

SF.20.01.B4008:Next-generation Aerial Directional Data Link & Networking (NADDLN)

Matyjas, J.

(315) 330-4255

Rowe, N.

(315) 330-7047

Given the scarcity of spectrum, there is a desire to develop self-forming, self-managing directional tactical data links operating at higher frequencies. Directional networking provides an opportunity to increase spectral efficiency, support ad-hoc aerial connectivity, improve resistance to intended and unintended interference, and increase the potential capacity of the link. However, relative to omnidirectional systems, added complexity arises in the pointing, acquisition and tracking (PAT) required to establish and maintain a network of directional links. Research interests reside in (1) the ability to make real-time content/context-aware trades involving capacity, latency, and interference tolerance; (2) mission-aware link and network topology control; and (3) affordable apertures and PAT systems; ultimately, to deliver new capabilities for next-generation aerial directional data link & networking (NADDLN).

SF.20.01.B4111:Agile Networking for the Aerial Layer

Soomro, A.

(315) 330-4694

The characteristics of today's aerial layer networks limit effective information sharing and distributed command and control (C2), especially in contested, degraded, operationally limited (CDO) and anti-access/area denial (A2/AD) environments, where the lack of interoperability and pre-planned/static link configurations pose the greatest challenges. Advanced research in wireless networking is sought to support aerial information exchange capabilities in highly dynamic environments. This includes, but is not limited to: disruption/delay tolerant networking; radio-to-router interface protocols; opportunistic transport protocols; resilient data/message protocols and on-demand prioritization; dynamic performance-enhancing proxies; spectrum use; and infrastructure sharing and mesh networking.

SF.20.01.B4333:Situational Awareness and Resiliency in Cross Domain Security Systems

Fu, Y.

(315) 330-4950

Cross-domain guard systems are a key component within any cross-domain information sharing capability. Due to the guard’s gatekeeper role between domains, it is a primary target for cyber attack, whether from a low-side network trying to gain high-side access or from malware on the high side trying to exfiltrate data to the low side. Historically these systems have been designed as standalone units connecting the various security domains, but with separate management and reporting channels: they have little visibility into the status of their connected networks, and the connected networks have little visibility into the status of the guards. Resiliency is an important trend in designing systems with regard to cyber defense. Simply put, resilience is the ability to provide and maintain an acceptable level of service in the face of faults and challenges to normal operation. Resilience is related to survivability, which builds on the disciplines of security, fault tolerance, safety, reliability, and performance. The approach assumes systems will be compromised to some extent and implements design strategies and techniques that support a balanced combination of protections, detections, and adaptive technical and operational responses that dynamically evolve in response to current and future cyber events. One important aspect of achieving resilient systems is to have good situational awareness of the environment and to adapt to changes within that environment. This topic aims to treat the cross-domain guard as a key part of designing a resilient Multi-Level Security (MLS) system: to formalize techniques for collecting situational awareness information from the various connected security domains, sharing this information securely, and acting on it based on policy, in a fashion agnostic to any specific guard implementation.

Research should examine the current state of the art in malware sensing and reporting in a closed-systems environment, guard status reporting, guard management infrastructures, and applicable resiliency techniques (Protect/Deter, Detect/Monitor, etc.). The expected outcome of this effort is a new guard architecture, or an extension to an existing guard architecture, in which the guard is capable of acquiring sensor information from the various security domains it is connected to and of increasing or decreasing its security posture appropriately based upon what it can detect.

SF.20.01.B4016:Reactive Service Migration

Milligan, J.

(315) 330-4763

Reactive service migration involves service fault detection and fail-over mechanisms, information service workload migration strategies to relieve overloaded network resources, and pre-positioning of information and services by recognizing the usage patterns of information consumers to anticipate their needs ahead of time. Reactive service migration fail-over mechanisms might make use of workflow compensation, service redundancy, or other exception handling techniques. Workload migration may involve the use of load balancing techniques to achieve optimal resource utilization, maximize throughput, minimize response time, and avoid overload. Pre-positioning of information and services might require the tracking and detection of events or changes in state which indicate an impending user need. In all cases, reactive service migration is concerned with optimizing the quality and availability of information management system services.
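
A minimal sketch of one such fail-over strategy, choosing the least-loaded healthy replica, might look like the following; the node names and the normalized load metric are hypothetical, and a production policy would weigh many more signals (latency, capacity, anticipated demand).

```python
# Sketch (illustrative only): a reactive migration policy that fails a
# service over to the healthy replica reporting the lowest load, one
# simple instance of the load-balancing strategies described above.

def pick_target(replicas, failed):
    """replicas: dict name -> load in [0, 1). Returns the least-loaded
    replica not in the failed set, or None if nothing usable remains."""
    candidates = {n: l for n, l in replicas.items()
                  if n not in failed and l < 1.0}
    if not candidates:
        return None                      # trigger exception handling instead
    return min(candidates, key=candidates.get)

replicas = {"node-a": 0.92, "node-b": 0.35, "node-c": 0.60}
target = pick_target(replicas, failed={"node-a"})   # -> "node-b"
```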

SF.20.01.B4017:Enhanced Information Streaming

Kohler, R.

(315) 330-2016

Enhanced information streaming deals with techniques to annotate and otherwise characterize the content of streaming data in order to enrich our ability to interact with the data in ways that go beyond the frames and timelines that current video interfaces impose upon the user. Although frames and timelines are useful notions, they are less well suited to other types of interactions with video. In many cases, users are likely to be more interested in such things as motion, action, character, and theme. For example, finding a moment in a video in which an object is in a particular place may be of interest, or the goal might be to compose a still image from multiple moments in a video. Although it is possible to compute object boundaries and to track object motion, typically this information is not captured in a manageable way, nor do present interfaces utilize this information for user interaction. Of interest are ways to embed glyphs and graphics into streaming media (such as descriptive labels, illustrative sketches, path arrows indicating motion, etc.), to associate metadata with the indexed content of streaming data, and to make this additional information available to consumers for managing the playback and use of streaming information in ways that add value based on specific user needs.

SF.20.01.B4555:Dynamic Resource Allocation in Airborne Networks

Bentley, E.

(315) 330-2371

From the Air Force perspective, a new research and development paradigm supporting dynamic airborne networking parameter selection is of paramount importance to the next-generation warfighter. Constraints related to platform velocity, rapidly changing topologies, mission priorities, power, bandwidth, latency, security, and covertness must be considered. By developing a dynamically reconfigurable network communications fabric that allocates and manages communications system resources, airborne networks can better satisfy and assure multiple, often conflicting, mission-dependent design constraints. Special consideration will be given to topics that address cross-layer optimization methods focused on improving performance at the application layer (e.g., video or audio), spectral-aware and/or priority-aware routing and scheduling, and spectral utilization problems in cognitive networks.
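
As one concrete instance of the resource-allocation problems above, the sketch below implements textbook water-filling over parallel subchannels; the noise levels and power budget are illustrative assumptions, and an airborne system would additionally fold in the mission-priority and covertness constraints described in the text.

```python
# Sketch (illustrative only): classic water-filling power allocation.
# Power is poured over subchannels so that p_i = max(0, mu - n_i), with
# the water level mu chosen (here by bisection) to spend the full
# budget; noisier channels receive less power, or none at all.

def water_fill(noise, total_power, iters=100):
    lo, hi = 0.0, max(noise) + total_power
    for _ in range(iters):                  # bisect on the water level mu
        mu = (lo + hi) / 2.0
        spent = sum(max(0.0, mu - n) for n in noise)
        if spent > total_power:
            hi = mu
        else:
            lo = mu
    return [max(0.0, mu - n) for n in noise]

alloc = water_fill(noise=[0.1, 0.5, 2.0], total_power=1.0)
# the two quieter subchannels share the budget; the noisiest gets none
```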

SF.20.01.B4020:Cyber Defense Research

Njilla, L.

(315) 330-4939

Cyber Defense is concerned with the protection and preservation of critical information infrastructures in order to ensure the United States' dependency on cyberspace remains beneficial and does not turn a technological advantage into a vulnerability.

This technology area seeks to: 1) protect our own information space through assurance, agility, denial, deception, and deterrence; 2) enable our system to automatically survive attacks through an innate ability to deal with unanticipated states and environments; 3) provide the means to identify, understand, attribute and localize vulnerabilities before they are exploited, and attacks as they occur; and 4) recover and reconstitute systems, data, and information states rapidly to ensure continuity of operations.

Fundamental research areas of interest within this topic include:

·         Methods for mission mapping and dependency analysis within complex systems; going beyond computer and network assurance to mission assurance.

·         Design of trustable systems composed of both trusted and untrusted hardware and software; study of virtualization and trusted platforms

·         Algorithms and innate mechanisms that enable systems to automatically continue correct operation when presented with unanticipated input or in the face of an undetected bug or vulnerability.

·         Techniques that can disrupt an attack during its early stages (reconnaissance, planning, and testing), such as polymorphism, agility, and randomization, at all layers of networking and computer architectures, to reduce the attackers' understanding of our systems and their ability to launch attacks, while maintaining our own situation awareness: “moving target defenses.”

·         The ability of information systems to “fight through” attacks, without operator intervention, in a contested environment characterized by “zero day” attacks.

·         Examination of assumptions, mechanisms, and implementations of security features that may be adequate for wired networks and devices but provide opportunities for attacks on wireless and mobile systems.

·         Theories of complex systems describing interactions of large systems and systems of systems that lead to better understanding of their emergent behaviors during attack and reconstitution; epidemiological models that may be used to predict system responses to Internet worms and coordinated attacks as well as analyses of self-healing and self-restoring systems.

Development of new cryptographic techniques is not of interest under this research opportunity.

SF.20.01.B4345:Market-Based and Game Theoretic Methods for Resource Allocation in the Cloud

Njilla, L.

(315) 330-4939

Information systems are continually expanding, as evidenced by the rapid year-over-year growth in Internet connections. Similar growth is exhibited by information systems in defense. The Air Force’s mission to fly and fight in Air, Space, and Cyberspace involves technologies that provide information to the warrior anywhere, anytime, and for any mission. This far-reaching enterprise will necessarily span multiple networks and computing domains, including those that are commercial and those that are exclusively military. As a result, many users with different goals and priorities vie for communication and computing resources. Managing this vast system to ensure dependable operation that maintains users’ quality-of-service levels has led researchers to propose computational markets as a means of controlling the allocation of system resources. Economics has always been a factor in engineering; because it is also the study of resource allocation problems, economics is a natural source of answers for managing large-scale information systems. By introducing software agents, pricing mechanisms, and game-theoretic mechanisms, a computational economy can strive to exhibit the same phenomena as a real one: it will admit arbitrary scale, heterogeneity of resources, decentralized asynchronous operation, and tolerance of localized failures. These derived benefits are compelling, and recent advances in cloud computing have created opportunities for the serious contemplation of building computational markets.
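
A proportional-share market, sketched below, is perhaps the simplest pricing mechanism of the kind described; the bidder names, budgets, and capacity figure are invented for illustration, and real computational markets layer far richer auction and agent dynamics on top of this idea.

```python
# Sketch (illustrative only): a proportional-share computational market.
# Each agent's bid buys a share of the resource proportional to its
# fraction of the total spend; the market-clearing unit price emerges
# as total bids divided by capacity.

def allocate(bids, capacity):
    total = sum(bids.values())
    price = total / capacity                 # market-clearing unit price
    return {agent: bid / price for agent, bid in bids.items()}, price

# Hypothetical consumers bidding for 100 units of link capacity
shares, price = allocate({"isr-feed": 60.0, "chat": 20.0, "ftp": 20.0},
                         capacity=100.0)
# the highest bidder receives the largest share
```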

SF.20.01.B4567:Application of Game Theory and Mechanism Design to Cyber Security

Njilla, L.

(315) 330-4939

Cyber attacks pose a significant danger to our economic prosperity and national security, and cyber security research seeks to establish a solid scientific basis for countering them. Cyber security is a challenging problem because of the interconnection of heterogeneous systems and the scale and complexity of cyberspace. This research opportunity is interested in theoretical models that can broaden the scientific foundations of cyber security and in automated algorithms for making optimum decisions relevant to cyber security. Current approaches to cyber security that rely overly on heuristics have demonstrated only limited success. Theoretical constructs and mathematical abstractions provide a rigorous scientific basis for cyber security because they allow for reasoning quantitatively about cyber attacks.

Cyber security can mathematically be modeled as a conflict between two types of agents: the attackers and the defenders. An attacker attempts to breach the system’s security while the defenders protect the system. In this strategic interaction, each agent’s action affects the goals and behaviors of others. Game theory provides a rich mathematical tool to analyze conflict in strategic interaction and thereby gain a deep understanding of cyber security issues. The Nash equilibrium analysis of the security games allows the defender to allocate cyber security resources, understand how to prioritize cyber defense activities, evaluate the potential security risks, and reliably predict the attacker’s behavior.
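
The equilibrium analysis above can be made concrete with a two-target zero-sum inspection game, solvable in closed form as sketched below; the asset values are illustrative assumptions, and richer security games would relax the zero-sum and full-information simplifications.

```python
# Sketch (illustrative only): mixed-strategy Nash equilibrium of a
# two-target zero-sum inspection game. The defender can monitor one
# target per period; an attack on an unmonitored target i yields v_i.

def equilibrium(v1, v2):
    """Returns (defender mix, attacker mix, attacker's expected gain)."""
    d1 = v1 / (v1 + v2)            # defend the valuable target more often
    a1 = v2 / (v1 + v2)            # attack the *less* defended target more often
    value = v1 * v2 / (v1 + v2)    # at equilibrium both pure attacks pay equally
    return (d1, 1 - d1), (a1, 1 - a1), value

defend, attack, gain = equilibrium(v1=8.0, v2=2.0)
# defend = (0.8, 0.2): the high-value target is covered 80% of the time
```

The indifference property (either pure attack earns the same expected payoff against the equilibrium defense) is exactly what lets the defender prioritize resources and predict attacker behavior, as described above.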

Securing cyberspace needs innovative game theoretic models that consider practical scenarios such as: incomplete information, imperfect information, repeated interaction and imperfect monitoring. Moreover, additional challenges such as node mobility, situation awareness, and computational complexity are critical to the success of wireless network security. Furthermore, for making decisions on security investments, special attention should be given to the accurate value-added quantification of network security. New computing paradigms, such as cloud computing, should also be investigated for security investments.


We also explore novel security protocols that are developed using a mechanism design principle. Mechanism design can be applied to cyber security by designing strategy-proof security protocols or developing systems that are resilient to cyber attacks. A network defender can use mechanism design to implement security policies or rules that channel the attackers toward behaviors that are defensible (i.e., the desired equilibrium for the defender).

SF.20.01.B4030:Cyber Security Research and Applications

Njilla, L.

(315) 330-4939

Securing cyberspace continues to be an active research area. Despite much progress, critically-important cybersecurity challenges remain. This is made evident by the many critical systems that have become exposed to new threats from vulnerabilities that malicious agents exploit. The resultant cyber-attacks have meant significant cost to private industries and government agencies. As a first step towards achieving the long-term and affordable goal of securing cyberspace, this research opportunity is interested in novel approaches to tackle a wide range of cyber security problems related to the following:

·        Cyber-threat information sharing

·        Cyber-threat monitoring

·        Mathematical approaches to hardware Trojan detection

·        Side-channel attacks in cloud computing

·        Security of online social networks

SF.20.01.B4303:Motion Imagery (or Video) Processing and Exploitation

Howlett, T.

(315) 330-4592

Motion imagery sources include everything from airborne collectors to YouTube.  New and innovative technology is required to exploit and extract the relevant information content and to manage the whole exploitation process. Visual processing is the focus, but leveraging all aspects of the data is of interest (e.g., audio and metadata), as well as using any additional correlating sources (e.g., reference imagery or coincident sensors).  Both semi-automated and fully automated capabilities are of interest.  Emphasis will be on overcoming or working around the current limits of computer vision to arrive at a useful capability for an AF analyst.  Sample topics of interest include: biologically inspired techniques, scene classification, event detection, object detection and recognition, optimization techniques, Bayesian methods, geo-registration, indexing, etc.

SF.20.01.B4336:Audio Processing

Haddad, Darren

(315) 330-2906

The Audio Group in AFRL/RIGC is involved in all aspects of speech processing and is a unique combination of linguists, mathematicians, DSP engineers, software engineers, and intelligence operators.  This combination of individuals allows us to tackle a wide spectrum of topics, from basic research such as channel estimation, robust word recognition, language and dialect identification, and confidence measures to the challenging transitional aspects of real-time implementation, GUI design, and concepts of operations.  The Audio Group also has significant thrusts in noise estimation and removal, speaker identification including open-set identification, keyword spotting, robust feature extraction, language translation, analysis of stressed speech, coding algorithms along with the consequences of compression schemes, watermarking, co-channel mitigation, and recognition of background events in audio recordings.

SF.20.01.B4437:Communications Processing Techniques

Smith, D.

(315) 330-3474

We view communications processing as the gathering and exploitation of technical information derived from communication signals.  The communication processing we perform mainly deals with the exploitation of messages or voice information but excludes open radio and television broadcasts. To perform exploitation, we need to develop advanced technologies to intercept, collect, locate and process communication signals in all parts of the spectrum.  The objective is to maximize the information that can be extracted from this raw data.  This information includes:

·         Person or source of the communication

·         Location of the transmitter

·         Function of the transmitter

·         Radio Frequency (RF) and other technical characteristics of the transmission

·         Content of the transmission

·         The recipient of the transmission

The end result is the development of automated processes to extract, analyze, correlate, sort and report information.


The technical challenges include: development of interference cancellation techniques and multi-user detection (MUD) algorithms, beamforming techniques, hardware architectures and software methodologies, geolocation techniques and systems, and signal processing software.  Research into developing unique and advanced methods to collect, process, and exploit communication signals in high-density, rapidly changing environments is of great importance. The research is expected to be a combination of analytical and experimental analyses. Experimental aspects will be performed via simulations using an appropriate signal processing software tool, such as MATLAB.
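
As a small worked instance of the geolocation techniques mentioned above, the sketch below computes an emitter bearing from a single-baseline time difference of arrival under a far-field assumption; the baseline and delay values are illustrative (the text suggests MATLAB for such simulations; Python is used here only to keep the sketch self-contained).

```python
# Sketch (illustrative only): single-baseline direction finding. Two
# sensors a distance d apart measure a time-difference-of-arrival dt;
# under a far-field (plane-wave) assumption the bearing off broadside
# follows from sin(theta) = c * dt / d.
import math

C = 3.0e8   # propagation speed, m/s

def bearing_deg(dt_s, baseline_m):
    s = C * dt_s / baseline_m
    if abs(s) > 1.0:
        raise ValueError("inconsistent measurement: |c*dt| exceeds baseline")
    return math.degrees(math.asin(s))

theta = bearing_deg(dt_s=50e-9, baseline_m=30.0)   # 50 ns lag, 30 m baseline
# -> the emitter sits 30 degrees off broadside
```

Crossing bearings (or TDOA hyperbolas) from several such baselines is what yields an actual transmitter location fix.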

SF.20.01.B4438:Wireless Sensor Networks in Contested Environments

Huie-Seversky, L.

(315) 330-3187

Zachariah, Nishant

Sensor networks are particularly versatile for a wide variety of detection and estimation tasks.  Due to the nature of communication in a shared wireless medium, these sensors must operate in the presence of other co-located networks which may have competing, conflicting, and even adversarial objectives.  This effort focuses on the development of the fundamental mathematics necessary to analyze the behavior of networks in contested environments.   Security, fault tolerance, and methods for handling corrupted data in dynamically changing networks are of interest.

Research areas include but are not limited to optimization theory, information theory, detection/estimation theory, quickest detection, and game theory.

 Development of new cryptographic techniques is not of interest under this research opportunity.  
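To make one of the listed research areas concrete, quickest detection asks how fast a change in a sensed process can be flagged; Page's CUSUM test is the classical baseline. The Python sketch below is illustrative only (the means, noise level, and threshold are assumed values, not tied to any particular sensor network):

```python
import random

def cusum(samples, pre_mean, post_mean, sigma, threshold):
    """Page's CUSUM: return the index of the first alarm for a shift in mean
    from pre_mean to post_mean (Gaussian noise, std dev sigma), or None if
    the statistic never crosses the threshold."""
    s = 0.0
    for t, x in enumerate(samples):
        # log-likelihood ratio increment for a Gaussian mean shift
        llr = (post_mean - pre_mean) * (x - (pre_mean + post_mean) / 2) / sigma ** 2
        s = max(0.0, s + llr)  # reset at zero: only upward drift accumulates
        if s > threshold:
            return t
    return None

# Mean shifts from 0 to 2 at t = 50; the alarm should fire shortly after.
rng = random.Random(0)
stream = [rng.gauss(0, 1) for _ in range(50)] + [rng.gauss(2, 1) for _ in range(50)]
alarm = cusum(stream, 0.0, 2.0, 1.0, threshold=5.0)
```

Tuning the threshold trades false-alarm rate against detection delay, the central tension the quickest-detection literature formalizes.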


SF.20.11.B4039:Big Data Analytics for Activity Based Intelligence

Banas, Christopher

(315) 330-2202

AFRL seeks innovative research in the area of Big Data Analytics for Activity Based Intelligence (ABI). More specifically, AFRL seeks automated or semi-automated procedures to characterize and locate activities and actions/transactions, identify and locate actors and entities conducting the activities and transactions, determine the existence, topology, leadership, and other characteristics of covert networks, understand the relationships between networks, and determine patterns of life from large amounts of externally observed data. Research interests also include the discovery and understanding of unknown activities and associated trends/patterns/relationships. In addition, these techniques should move beyond the limitations of traditional approaches to consider temporal dynamics and/or multi-modal networks and are most interesting when researched in the context of a variety of intelligence sources and types and the challenges presented by “Big Data.”

SF.20.11.B4040:Foundations of Resilient and Trusted Systems

Drager, S.

(315) 330-2735

Research opportunities are available for model-based design, development and demonstration of foundations of resilient and trustworthy computing, including technology, components and methods supporting a wide range of requirements for improving the resiliency and trustworthiness of computing systems via multiple resilience and trust anchors throughout the system life cycle, including design, specification and verification of cyber-physical systems. Research supports security, resiliency, reliability, privacy and usability, leading to high levels of availability, dependability, confidentiality and manageability. Thrusts include hardware, middleware and software theories, methodologies, techniques and tools for resilient and trusted, correct-by-construction, composable software and system development. Specific areas of interest include: automated discovery of relationships between computations and the resources they utilize, along with techniques to safely and dynamically incorporate optimized, tailored algorithms and implementations constructed in response to ecosystem changes; theories and application of scalable formal models, automated abstraction, reachability analysis, and synthesis; perpetual model validation (both of the system interacting with the environment and of the model itself); trusted resiliency and evolvability; compositional verification techniques for resilience and adaptation to evolving ecosystem conditions; reduced complexity of autonomous systems; effective resilient and trusted real-time multi-core exploitation; architectural security, resiliency and trust; provably correct complex software and systems; composability and predictability of complex real-time systems; resiliency and trustworthiness of open source software; scalable formal methods for verification and validation to prove trust in complex systems; novel methodologies and techniques which overcome the expense of current evidence generation/collection techniques for certification and accreditation; and a calculus of resilience and trust allowing resilient and trusted systems to be composed from untrusted components.

SF.20.11.B4442:Formal Methods for Complex Systems

Rodriguez, D.

(315) 330-4280

Formal methods are based on areas of mathematics that support reasoning about systems. They have been successful in supporting the design and analysis of systems of moderate complexity. Today’s formal methods, however, cannot address the complexity of the computing infrastructure needed for our defense.

This area supports investigation of new, powerful formal methods covering a range of activities throughout the lifecycle of a system: specification, design, modeling, and evolution. New mathematical notions are needed to address the state-explosion problem, including more powerful forms of abstraction and composition. Furthermore, novel, semantically sound integration of formal methods is also of interest. The goal is to develop tools that are based on rigorous mathematical notions and provide useful, powerful, formal support in the development and evolution of complex systems.

SF.20.11.B4043:Trusted Software-Intensive Systems Engineering

McKeever, W.

(315) 330-2987

Software is a prime enabler of complex weapons systems and its fungible nature is key to the development of next generation adaptive systems.

Yet software is the most problematic element of large scale systems, dominated by unmet requirements and leading to cost and schedule overruns. The complexity of today's systems lies in greater than 10^5 requirements, over 10^7 lines of code, thousands of component interactions, product life cycles of more than 30 years, and stringent certification standards. The tools used to design, develop and test these complex systems do little to instill trust that the software is free from vulnerabilities or malicious code, or that it will function correctly. Furthermore, there is virtually no tool capable of detecting design flaws. The objective of the trusted software-intensive systems engineering topic is to develop techniques and tools to enable trust (with a focus on security and correctness) throughout the software lifecycle.

Areas of interest include: evidence-based software assurance; static analysis tools, with a preference for analysis at the binary level; algorithm or design-level analysis; secure software development; model-based software engineering; and correct-by-construction software generation.

SF.20.13.B0944:Many-Node Computing for Cognitive Operations

Renz, T.

(315) 330-3423

The sea change in computing hardware architectures, away from faster cycle rates and towards processor parallelism, has expanded opportunities for development of large scale physical architectures that are optimized for specific operations. Porting of current cognitive computing paradigms onto systems composed of parallel mainstream processors will continue in the commercial world. What higher cognitive functionality could we achieve if we take better advantage of physical capabilities enabled by new multi-processor geometries?

Perception, object recognition and assignment to semantic categories are examples of lower level cognitive functions. Assignment of valence, creation of goals and planning are mid level functions. Self-awareness and reflection are higher level processes that are so far beyond current cognitive systems that relatively little has been done to model them. Often, models assume higher cognitive processes will emerge once the computing system reaches some level of speed/complexity. The problem is that the computational power required exceeds the reachable limit of single processor architectures and probably exceeds the limits of conventional parallel architectures. This topic seeks to enable mid and higher level cognitive function through the creation of new physical architectures that address the computational demand in novel ways.

We are interested in developing models for the computational scale of the mid and higher functions, and processor/memory node architectures that facilitate cognitive operations by configuring the physical architecture to closely resemble the functional cognitive architecture, e.g., where each node in a network represents and functions as a processor for a single semantic primitive. What new hierarchical architectures could we design for million-node systems, where the individual nodes may be small ASPs with very fast communication between nodes? A project of interest would combine both sides: new algorithms for higher level cognitive functions and new architectures to enable the computation in a realistic time frame. AFRL/RIT has projects underway to enable million-node systems.

SF.20.13.B0945: Nanocomputing

Van Nostrand, J.

(315) 330-4920

Advances in nanoscience and technology show great promise in the bottom-up development of smaller, faster, and reduced-power computing systems. Nanotechnology research in this group is focused on the development of crossbar computing architectures which utilize existing nanotechnologies, including nanowires, coated nanoshells, memristors, and carbon nanotubes, and are scalable to 100x100 arrays. We have a particular interest in the modeling and simulation of architectures that exploit the unique properties of these novel nanotechnologies. This includes development of nonlinear sub-circuit models that accurately represent sub-circuit performance with subsequent CMOS integration. Also of interest is the use of nanoelectronics and of thermal management techniques based on nanotechnologies in 3D computer architectures.

SF.20.13.B0946:Quantum Computing Theory and Simulation

Alsing, P.

(315) 330-4960

Quantum computing research involves interdisciplinary theoretical and experimental work from diverse fields such as physics, electrical and computer engineering, computer science, and pure and applied mathematics. Objectives of AFRL’s Emerging Computing Technology Branch include the development of quantum algorithms with an emphasis on large scale scientific computing and search/decision/optimization applications, implementations of quantum computational schemes with low error threshold rates, implementations of quantum error correction such as topological protection, and the simulation of quantum circuits/computers and quantum error correction schemes with an emphasis on modeling experiments. Topics of special interest include the cluster state quantum computing paradigm, quantum simulated annealing, the behavior of quantum information and entanglement under arbitrary motion of qubits, measures of quantum entanglement, and the distinction between quantum and classical information and its subsequent exploitation.

SF.20.13.B0947: Compressive Sampling

Suter, B.

(315) 330-7563

As future small autonomous UAVs shrink in size, they will find it increasingly difficult to support high capacity data links and large on-board processing requirements for data-heavy sensing applications. One approach to solving this problem is to develop compressive sampling architectures and algorithms which reduce the communications bandwidth while allowing simple encoding and adaptive decoding. This topic addresses the theory and applications of compressive sampling. This includes:

·        Development of a theoretical framework for compressive sampling. Some promising directions are based in part on the study of multilevel and nonconvex optimization.

·        Application of computational methods to advance the state-of-the-art in airborne networking. For example, the realization of rank deficient network coding as an  application of compressive sampling.

·        Application of compressive sampling to permit novel computational paradigms. Such paradigms show potential for a myriad of applications, including wireless parallel computers
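To give a flavor of the recovery side of compressive sampling, the sketch below runs plain matching pursuit over a tiny orthonormal dictionary (the atoms and coefficients are invented for illustration; practical systems use random measurement matrices and stronger solvers such as OMP or basis pursuit):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(atoms, y, n_iter):
    """Greedy matching pursuit: build a sparse coefficient vector x with
    y ~= sum_j x[j] * atoms[j]. Atoms are assumed unit-norm."""
    residual = list(y)
    x = [0.0] * len(atoms)
    for _ in range(n_iter):
        # pick the atom most correlated with the current residual
        j = max(range(len(atoms)), key=lambda k: abs(dot(atoms[k], residual)))
        c = dot(atoms[j], residual)
        x[j] += c
        residual = [r - c * a for r, a in zip(residual, atoms[j])]
    return x

# Orthonormal 2-D dictionary; y = 2*atoms[0] + 1*atoms[1].
atoms = [[0.6, 0.8], [-0.8, 0.6]]
y = [0.4, 2.2]
x = matching_pursuit(atoms, y, 2)
```

With an orthonormal dictionary the greedy steps recover the coefficients exactly; the interesting regimes for this topic are overcomplete dictionaries and undersampled measurements.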

SF.20.13.B0950:Quantum Information Processing

Fanto, M. 

(315) 330-4455

This Quantum Information Processing topic is focused on computational methods and architectures. It has been well established that a computer based on quantum interference could offer significant increases in processing efficiency and speed over classical versions, and specific algorithms have been developed to demonstrate this in tasks of high potential interest such as database searches, pattern recognition, and unconstrained optimization.

However, present experimental progress, lagging far behind the theoretical, is at the level of several gates or qubits. The entangled-photon approach, encompassing quantum gates, cluster states, and Linear Optical Quantum Computing, will be experimentally pursued with particular attention to scalability issues. Experience with generation and detection of entangled photons is essential for this interaction, with parametric amplification a plus.

Theoretical advances will also be pursued with existing and custom quantum simulation software to model computational speedup, error correction and decoherence effects. Algorithm investigation will focus on hybrid approaches which simplify the physical realization constraints and specifically address tasks of potential military interest.

SF.20.14.B0851:Optical Interconnects

Osman, J.

(315) 330-7671

Our main area of interest is the design, modeling, and building of interconnect devices for advanced high-performance computing architectures, with an emphasis on interconnects for quantum computing. Current research focuses on interconnects for quantum computing, including switching of entangled photons for time-bin entanglement. Quantum computing is currently searching for a way to make meaningful progress without requiring a single computer with a very large number of qubits. The idea of quantum cluster computing, which consists of interconnected modules each containing a more manageable, smaller number of qubits, is attractive for this reason. The qubits and quantum memory may be fashioned using dissimilar technologies, and interconnecting such clusters will require pioneering work in the area of quantum interconnects. The communication abilities of optics, as well as the ability of optics to determine the current state of many material systems, make optics a prime candidate for these quantum interconnects.

SF.20.14.B0852:Neuromorphic Computing

Thiem, C. 

(315) 330-4893

Neuromorphic computing shows great promise in the development of intelligent systems able to imitate natural neuro-biological processes such as reasoning and perception. This is achieved by artificially recreating the highly parallelized computing architecture of the mammalian brain. In particular, neuromorphic computers are suitable for applications in pattern recognition and optimization, e.g., target finding, automated data processing, intelligence analysis, etc. In order to achieve high levels of intelligence within systems, neuromorphic computing exploits the characteristic behavior of novel complex materials and structures, together with advanced processing techniques, to achieve very large scale integration with highly parallel neural architectures. This research effort will focus on mathematical models, computing architectures and computational applications to develop neuromorphic computing processors. Also of interest is the development of neuromorphic computing architecture software emulation and hybrid VLSI CMOS architectures utilizing nanoscale technologies. Special emphasis will be placed on promising technologies and solutions to satisfy future Air Force needs employing intelligent systems to achieve the desired level of autonomy.

SF.20.14.B0853:Advanced Computing Processors Information Management

Ramseyer, G.


As the number of computing processors is increased for most applications, a situation is reached where processor information management becomes the bottleneck in scaling, and adding processors beyond this number results in a deleterious increase in processing time. Some examples that limit scalability include bus and switch contention, memory contention, and cache misses, all of which increase disproportionately as the number of processors increases. The objective of this topic is to investigate existing and/or develop novel methods of processor information management for multiprocessor and many-processor computing architectures that will allow for increased scaling.

SF.20.14.B0854:Data-Efficient Learning

Seversky, L.

(315) 330-2846

Heim, Eric

(315) 330-7084

Many recent efforts in machine learning have focused on learning from massive amounts of data, resulting in large advancements in machine learning capabilities and applications. However, many domains lack access to the large, high-quality, supervised data that is required and therefore are unable to fully take advantage of these data-intense learning techniques. This necessitates new data-efficient learning techniques that can learn in complex domains without the need for large quantities of data. This topic focuses on the investigation and development of data-efficient machine learning methods that are able to leverage knowledge from external/existing data sources, exploit the structure of data and/or the parameters of the learning models, and explore the efficient joint collection of training data and learning. Areas of interest include, but are not limited to: active learning, semi-supervised learning, learning from "weak" labels/supervision, one/zero-shot learning, and transfer learning/domain adaptation, as well as methods that exploit structural or domain knowledge. Furthermore, while fundamental machine learning work is of interest, so are principled data-efficient applications in areas including, but not limited to: computer vision (image/video categorization, object detection, visual question answering, etc.), social and computational networks and time-series analysis, and recommender systems.
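As a concrete instance of one listed area, active learning via uncertainty sampling spends a labeling budget on the pool examples the current model is least certain about. The sketch below is a minimal illustration with invented model probabilities:

```python
import math

def entropy(p):
    """Binary prediction entropy in bits; maximal at p = 0.5."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def select_queries(pool_probs, budget):
    """Uncertainty sampling: choose the `budget` unlabeled examples whose
    predicted positive-class probability is most uncertain (highest entropy)."""
    ranked = sorted(range(len(pool_probs)),
                    key=lambda i: entropy(pool_probs[i]), reverse=True)
    return ranked[:budget]

# Model confidence on five unlabeled pool examples (invented values);
# the examples nearest 0.5 are the most informative to label.
pool = [0.95, 0.51, 0.10, 0.48, 0.99]
queries = select_queries(pool, budget=2)
```

Here the two examples with probabilities nearest 0.5 are selected for labeling; richer acquisition functions (expected model change, query-by-committee) follow the same pattern.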

SF.20.14.B0855:Complex Network and Information Modeling & Inference

Seversky, L.

(315) 330-2846

Huie-Seversky, L.

(315) 330-3187

Heim, Eric

(315) 330-7084

Recent advances in sensing technology have enabled the capture of dynamic heterogeneous network and information system data. However, due to limited resources, it is not practical to measure a complete snapshot of the network or system at any given time. This topic is focused on inferring the full system, or a close approximation, from a minimal set of measurements. Relevant areas of interest include matrix completion, low-rank modeling, online subspace tracking, classification, clustering, and ranking of single and multi-modal data, all in the context of active learning and sampling of very large and dynamic systems. Application areas of interest include, but are not limited to, communication, social, and computational network analysis, system monitoring, anomaly detection, and video processing. Also of interest are topological methods such as robust geometric inference, statistical topological data analysis, and computational homology and persistence. The exploration of new techniques and efficient algorithms for topological data analysis of time-varying and dynamic systems is of particular interest. Candidates should have a strong research record in these areas.
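As a toy illustration of the matrix completion and low-rank modeling themes, the following sketch fills in a missing entry of an exactly rank-one matrix by alternating least squares (the matrix, the rank-one assumption, and the iteration count are illustrative; practical methods handle higher rank, noise, and nuclear-norm relaxations):

```python
import random

def complete_rank1(observed, shape, n_iter=100):
    """Fill in a partially observed matrix assumed to be rank one, by
    alternating least squares on factor vectors u and v (M ~ outer(u, v))."""
    rows, cols = shape
    rng = random.Random(1)
    u = [rng.random() + 0.5 for _ in range(rows)]
    v = [rng.random() + 0.5 for _ in range(cols)]
    for _ in range(n_iter):
        for i in range(rows):
            entries = [(c, x) for (r, c), x in observed.items() if r == i]
            if entries:
                u[i] = (sum(v[c] * x for c, x in entries)
                        / sum(v[c] ** 2 for c, _ in entries))
        for j in range(cols):
            entries = [(r, x) for (r, c), x in observed.items() if c == j]
            if entries:
                v[j] = (sum(u[r] * x for r, x in entries)
                        / sum(u[r] ** 2 for r, _ in entries))
    return [[u[i] * v[j] for j in range(cols)] for i in range(rows)]

# True matrix is the rank-one outer product of [1, 2, 3] and [4, 5, 6];
# hide the (2, 2) entry and recover it from the remaining eight.
true = [[a * b for b in (4, 5, 6)] for a in (1, 2, 3)]
observed = {(i, j): true[i][j]
            for i in range(3) for j in range(3) if (i, j) != (2, 2)}
filled = complete_rank1(observed, (3, 3))
```

The hidden entry is recovered because the observed entries pin down the rank-one factors, the same principle that lets subsampled network measurements determine the full system state.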

SF.20.14.B0856:Event Detection and Predictive Assessment in Near-real Time Complex Systems

Vegairizarry, A.

(315) 330-2382

Making best use of multi-point observations and sensor information for event detection and predictive assessment in complex, near real time systems is a challenge which presents itself in many military domains. The first step in tackling these challenges is to analyze and understand the data. Depending on the algorithm used to detect an anomalous event, the nature and extent of variable correlations must be understood. This research will consider methods to quantify the strength of the correlations of input variables to output variables and develop techniques to account for lag times in the data itself. This is no easy task, since sensor readings and operator logs are sometimes inconsistent and/or unreliable, some catastrophic failures can be almost impossible to predict, and time lags and leads in real world systems may vary from one day to the next. After detecting where the strongest correlations exist, one must choose a model which can best assess the current conditions and then predict the possible outcomes that could occur for a number of possible scenarios. Scientific issues of interest include, but are not limited to: (1) advanced statistical methods to determine dependencies between sensor inputs and the combined effect of multiple sensors; (2) adaptive correlation analysis techniques which will evolve to discover new dependencies over time as conditions change; and (3) adaptive pattern matching methods to take correlated sensor inputs and characterize normal and anomalous conditions.
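A minimal sketch of the lag-aware correlation analysis described above (the two sensor series are fabricated, with the second a delayed, biased copy of the first): scanning Pearson correlation over candidate lags recovers the delay.

```python
def corr_at_lag(x, y, lag):
    """Pearson correlation between x[t] and y[t + lag]."""
    pairs = [(x[t], y[t + lag]) for t in range(len(x)) if 0 <= t + lag < len(y)]
    xs = [a for a, _ in pairs]
    ys = [b for _, b in pairs]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in pairs)
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

def best_lag(x, y, max_lag):
    """Lag (in samples) at which y is most correlated with x."""
    return max(range(-max_lag, max_lag + 1), key=lambda L: corr_at_lag(x, y, L))

# y is x delayed by three samples plus a constant bias, as if read by a
# downstream sensor; the scan should recover the three-sample lag.
x = [0, 1, 0, 2, 5, 2, 0, 1, 0, 3, 0, 1]
y = [0, 0, 0] + [v + 1 for v in x[:-3]]
```

Real sensor logs add the complications the topic names: dropouts, unreliable timestamps, and lags that drift, which is why adaptive rather than fixed-lag correlation is sought.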

SF.20.14.B1061:Programming for Emerging Nonlinear Computer Architectures

Yan, L.

(315) 330-2756

Modern computer architectures are separated into multiple abstraction layers: hardware, firmware, operating system, middleware, applications, etc. In this organization, low level details are abstracted away from higher level users. For instance, application programmers can focus on designing and implementing new algorithms and functionality without having to worry about the intimate details of instruction set architectures, physical memory management, communications protocols, etc. While this organization has been great for implementing everyday applications, the abstraction layers have actually been a hindrance for specialized applications. In order to attain high algorithmic efficiency, scientific computing practitioners must understand the details of available instruction sets (e.g., MMX and SSE), pipelining, cache design and memory bus bandwidth, amongst others. Similarly, security implementers must be cognizant of any special hardware (e.g., TPM chips and AES-NI extensions) to attain high performance and security. The same applies to nonlinear computing as well. In other words, the abstraction layers do not exist for these specialized applications and neither do their benefits.

However, as performance and security have become mainstream problems, new middleware and programming paradigms have been introduced. For example, CUDA and OpenCL are two programming paradigms that help abstract away some of the details of stream programming. Users write code in a C-like language, and the compiler takes care of data organization and stream processor allocation. Hadoop is another example of middleware that abstracts away the details of parallel and distributed programming and exposes the much simplified map-reduce algorithm.

This topic seeks to research and develop techniques and tools for abstracting away the details of a nonlinear computing system. An example topic of interest is compilers or middleware that abstract away the dynamic nature of chaos computers. In this way, if chaos computing is used for code obfuscation, the compiler is, in essence, a program obfuscator. A second example is model based design and implementation for nonlinear computing components. In this example topic, the different nonlinear computing instantiations are formally modeled. These models are then used to compose applications and runtime code that abide by user specifications.

SF.20.14.B1062:Towards Precise Low Level Program Analysis

Yan, L.

(315) 330-2756

Program analysis has traditionally been separated into two categories: dynamic and static. In dynamic analysis, the sample under test is executed and its runtime behavior is analyzed. In static analysis, the sample is analyzed at rest. The main benefit of static analysis is code coverage (e.g., the full control flow graph of a sample can be built); its main disadvantage is the lack of runtime or concrete data values. Conversely, the advantage of dynamic analysis is the availability of concrete information and the disadvantage is the lack of coverage. Thus, program analysis in practice is likely to use both static and dynamic techniques.

There are also different dimensions to program analysis. Analysis granularity is one of them. For obvious reasons, analyzing a program at a high level representation (e.g., source code) can benefit from the available contextual information, which is lost when a program is analyzed at a lower level (e.g., assembly). This loss of high level information, in turn, leads to a loss of precision (i.e., an increase in false positives). Unfortunately, low level analysis is the only viable approach for many applications. For instance, malware samples normally arrive as binaries and not as source code.

The main goal of this topic is to investigate techniques that can be used to increase the precision of low level analysis. To put it differently, how can we make low level analysis as precise as possible, with the upper bound being high level analysis? The proposed work should initially focus on individual subproblems in program analysis: information flow, control dependency, etc.

SF.20.14.B1063:Secure Processing Systems

Rooks, J.

(315) 330-2618

The objective of the Secure Processing Systems topic is to develop hardware that supports maintaining control of our computing systems. Currently, most commercial computing systems are built with the requirement to quickly and easily pick up new functionality. This also leaves the systems very vulnerable to picking up unwanted functionality. By adding specific features to microprocessors and limiting the software initially installed on the system, we can obtain the needed functionality yet not be vulnerable to attacks which push new code to our systems. Many of these techniques are known; however, there is little commercial demand for products that are difficult and time-consuming to reprogram, no matter how much security they provide. As a result, the focus of this topic is selecting techniques and demonstrating them through the fabrication of a secure processor. Areas of interest include: 1) design, layout, timing and noise analysis of digital integrated circuits; 2) implementing a trusted processor design and verifying that design; 3) selection of security features for a microprocessor design; 4) verifying manufactured parts; and 5) demonstrations of the resulting hardware.

SF.20.14.B1065:Mathematical Theory for Advances in Machine Learning and Pattern Recognition

Prater, A.

(315) 330-2804

To alleviate the effects of the so-called ‘curse of dimensionality’, researchers have developed sparse, hierarchical and distributed computing techniques to allow timely and meaningful extraction of intelligence from large amounts of data. As the amount of data available to analysts continues to grow, a strong mathematical foundation for new techniques is required. This research topic is focused on the development of theoretical mathematics with applications to machine learning and pattern recognition, with a special emphasis on techniques that admit sparse, hierarchical or parallelizable numerical methods. Research may be performed in, but is not limited to: sparse PCA, generalized Fourier series, low-rank matrix approximation and compressed sensing. Proposals with a strong mathematical foundation will receive special consideration.
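As a small worked example of the low-rank matrix approximation theme (the matrix is an invented, exactly rank-one example), power iteration on A^T A recovers the leading singular triple, the basic building block of rank-one and, with deflation, low-rank approximations:

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def norm(v):
    return sum(x * x for x in v) ** 0.5

def top_singular(A, n_iter=100):
    """Leading singular value and vectors of A via power iteration on A^T A."""
    At = transpose(A)
    v = [1.0] * len(A[0])
    for _ in range(n_iter):
        w = matvec(At, matvec(A, v))  # one step of power iteration
        nw = norm(w)
        v = [x / nw for x in w]
    u = matvec(A, v)
    sigma = norm(u)
    return sigma, [x / sigma for x in u], v

# Exactly rank-one matrix: outer product of [2, 4] and [1, 2, 2],
# so the leading singular value is ||[2,4]|| * ||[1,2,2]|| = 6*sqrt(5).
A = [[2.0, 4.0, 4.0], [4.0, 8.0, 8.0]]
sigma, u, v = top_singular(A)
```

Sparse PCA and compressed sensing refine this picture by constraining or subsampling the factors; the spectral decomposition above is the unconstrained baseline.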

SF.20.14.B1067: Dynamical Reservoir Computing

Nathan McDonald

(315) 330-3804

Recent advancements in nanoelectronics, photonics, neuromorphic systems, and cognitive neuroscience are enabling the development of radically different computational architectures based on reservoir computing concepts. Such systems are theoretically capable of solving the toughest temporal/spatial classification and regression problems, with Air Force applications focused on increased system autonomy and perception. This research explores a new class of computationally intelligent processors governed by the nonlinear dynamics within oscillating optical or electronic reservoirs. The nonlinear dynamics and delayed feedback (short term memory) of reservoirs enable networks to mimic transient neuronal responses and to project time-dependent input into high dimensionalities for categorization by an outside classifier. Such hardware-based reservoirs can operate near the edge of chaos, providing extreme sensitivity to input variations for increased degrees of separability between input signatures. In this context, the reservoirs function as time-delayed recursive networks that utilize feedback as short term dynamic memory for the processing of time-series input signals. These systems offer potentially disruptive capabilities in real time signature analysis, time-series predictions, and environmental perception for autonomous operations. Interests associated with this topic include: exploration of the required properties and associated mechanisms to build efficient reservoirs, system modeling, spike-timing-dependent plasticity (STDP), memristive systems, and cortical architectures, with emphasis on bio-inspired computational schemes based on the physics of nonlinear systems.
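A minimal software sketch of the reservoir idea (the sizes, sparsity, and weight scale are arbitrary illustrative choices; hardware reservoirs and proper spectral-radius tuning are the actual research targets): a fixed random recurrent network nonlinearly projects an input sequence into a high-dimensional state that a separately trained linear readout would then classify.

```python
import math
import random

def make_reservoir(n, density=0.5, scale=0.4, seed=7):
    """Fixed, sparse, random recurrent weights, scaled small enough to keep
    the dynamics contractive (a crude stand-in for spectral-radius tuning)."""
    rng = random.Random(seed)
    return [[rng.uniform(-scale, scale) if rng.random() < density else 0.0
             for _ in range(n)] for _ in range(n)]

def run_reservoir(W, w_in, inputs):
    """Drive the reservoir with a scalar input sequence; the final state is
    the high-dimensional feature vector a trained linear readout would use."""
    n = len(W)
    state = [0.0] * n
    for u in inputs:
        state = [math.tanh(sum(W[i][j] * state[j] for j in range(n)) + w_in[i] * u)
                 for i in range(n)]
    return state

n = 20
W = make_reservoir(n)
rng = random.Random(3)
w_in = [rng.uniform(-1.0, 1.0) for _ in range(n)]  # fixed random input weights
a = run_reservoir(W, w_in, [1, 0, 1, 1, 0, 1])
b = run_reservoir(W, w_in, [1, 0, 1, 1, 0, 0])  # same history, different last input
```

Only the readout is trained in reservoir computing; the recurrent weights stay fixed, which is what makes physical (optical or electronic) reservoirs attractive.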

SF.20.14.B1068:Quantum Networking with Atom-based Quantum Repeaters 

Soderberg, K.

(315) 330-3687

A key step towards realizing a quantum network is the demonstration of long distance quantum communication.  Thus far, using photons for long distance communication has proven challenging due to the absorption and other losses encountered when transmitting photons through optical fibers over long distances.  An alternative, promising approach is to use atom-based quantum repeaters combined with purification/distillation techniques to transmit information over longer distances.  This in-house research program will focus on trapped-ion based quantum repeaters featuring small arrays of trapped-ion qubits connected through photonic qubits.  These techniques can be used either to transmit information between a single beginning and end point, or be extended to create small networks with many users.

SF.20.14.B1069:Advanced High Speed Data Links

Salama, Y.

(315) 330-4977

This in-house research effort focuses on very high speed data links (multi-gigabit) built on commercial standards such as IEEE Std 802.16. We are exploring the advantages of using orthogonal frequency division multiplexing and multiple access (OFDM and OFDMA).  In order to achieve multi-gigabit performance, we are investigating the use of ultra-wideband communication schemes with high-order modulation techniques.  Several challenge topics need to be investigated in this project. These topics include, but are not limited to:

·        Doppler frequency spread for ultra-wideband communication systems using OFDM/OFDMA in high-mobility airborne environments

·        Peak-to-Average Power Ratio (PAPR) mitigation in OFDM communication systems

·        Clock and carrier recovery techniques in very high speed communication systems

·        Time and frequency synchronization in OFDM/OFDMA communication systems

·        Real-time, high-efficiency Forward Error Correction (FEC) techniques using state-of-the-art FPGA design
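To illustrate the PAPR challenge concretely, the sketch below computes the PAPR of a single OFDM symbol with a naive inverse DFT (the subcarrier count and symbol values are illustrative): identical symbols on all subcarriers add coherently and produce the worst case, while a single active subcarrier yields a constant-envelope signal.

```python
import cmath
import math

def ifft_naive(X):
    """Naive O(N^2) inverse DFT: subcarrier symbols -> time-domain samples."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def papr_db(subcarrier_symbols):
    """Peak-to-average power ratio of one OFDM symbol, in dB."""
    x = ifft_naive(subcarrier_symbols)
    powers = [abs(s) ** 2 for s in x]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

worst = papr_db([1 + 0j] * 16)   # all subcarriers aligned: 10*log10(16) dB
single = [0j] * 16
single[3] = 1 + 0j
best = papr_db(single)           # one subcarrier: constant envelope, 0 dB
```

High PAPR forces power amplifiers to back off from their efficient operating point, which is why the mitigation techniques listed above matter for airborne links.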

SF.20.14.B1070:Advanced Event Detection and Specification in Streaming Video

Aved, A.

(315) 330-4320

Video is leveraged for a variety of monitoring tasks, ranging from the monitoring of bridges, traffic and the interior of large buildings, to providing situational awareness in real time via unmanned aircraft. Current systems require human operators who must maintain constant, high levels of vigilance while monitoring. The goal is to develop algorithms that operate in real time (or near real time) that can detect and characterize both short-term events (recognized in a single frame or a few consecutive frames of video) and/or long-term activities (composed of one or more correlated events) that can occur over a timespan of seconds, minutes or hours. Also of interest are enabling technologies and techniques, including but not limited to the specification of events via high-level query languages, query optimization, and multi-INT data fusion (e.g., combining multiple modalities of data into a stream amenable to off-line search and retrieval or subsequent processing in a cloud computing environment). Architectures of interest include both CPU and GPU, as well as cloud/distributed systems and mobile devices.

SF.20.14.B1072:Feature-Based Projection of Threats

Sheaff, C.

(315) 330-7147

Methods have been developed to detect anomalous behaviors of adversaries as represented within sensor data, but autonomous projections of actual threats to US assets require further investigation and development.  The proposed research will investigate and develop both the foundation and the algorithms that can predict the type of threat a red asset poses to a blue asset. The inputs to the system may include: 1) an indication/warning mechanism that indicates anomalous behavior exists, and 2) a classification of the type of red/blue asset.  Approaches to consider include, but are not limited to, projections based on offensive/defensive guidance templates and techniques associated with machine learning.  The approach can be applied to any threat domain.  The example that follows illustrates application to U.S. satellite protection.  The offensive template determines the type of threat.  The classification algorithm provides notification of the type of asset involved; it is employed to determine (for example) whether the asset is intact or a fragment, its control states, the type of control state, and whether it is a rocket body, payload, or debris.  An example of an offensive assessment is a mass-inertia configuration change in an active red asset that is specific to robotic arm-type movements.  Mechanisms such as templates are used to project whether or not this asset is a threat by comparing configuration changes with known threatening scenarios through probabilistic analyses, such as Bayesian inference.  Robustness tests may be employed as well. For example, a threat can be simulated that is not specific to one template; the question to be answered is whether a combination of the templates can handle this case.  The defensive portion must provide countermeasures, e.g., in the case of a blue satellite, thruster burns to move away from possible threats.
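The probabilistic template comparison mentioned above reduces, in its simplest form, to a Bayesian update over threat hypotheses. The numbers below are invented solely to illustrate the mechanics:

```python
def posterior(prior, likelihoods):
    """Bayes' rule: posterior over hypotheses given observation likelihoods."""
    unnormalized = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(unnormalized)
    return [u / z for u in unnormalized]

# Hypotheses: benign maneuver vs. robotic-arm deployment (invented numbers).
prior = [0.9, 0.1]
# The observed mass-inertia change is eight times more likely under the
# threat template than under the benign template.
post = posterior(prior, [0.05, 0.4])
```

Even a strongly skewed prior toward the benign hypothesis shifts substantially after one observation that matches the threat template far better, which is the effect the template-comparison machinery exploits.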

SF.20.16.B0001: Efficient Experience Learning for Machine Intelligence

Robert Wright

(315) 330-2055

The development of robust autonomous systems capable of learning and adapting to complex novel situations is the ultimate goal of research in machine learning and artificial intelligence. Advancements towards this goal will have pervasive impacts and will enable breakthroughs in application areas such as unmanned vehicle control, autonomic computer systems, and automation. The U.S. Air Force recognizes this potential and has made research in this area a priority. Central to any robust autonomous system is an ability to learn from experiences in order to adapt and optimize performance in a given task. Experience in most domains is a finite and valuable commodity and must be used as efficiently and effectively as possible. The objective of this research topic is to investigate novel approaches for making the most of experience data. Potential areas of interest include:

  • Sample efficient learning algorithms
  • Off-policy learning approaches
  • Active sampling for policy and representation learning
  • Selective sample repository maintenance
  • Multi-agent cooperative learning
  • Parallel and distributed learning systems
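
The sample-efficiency and off-policy themes above can be illustrated with a toy batch Q-learning loop: a small, fixed set of transitions is collected once and then reused many times. The 2-state MDP and all parameter values are invented examples, not drawn from the topic.

```python
# Minimal sketch of off-policy, sample-efficient learning: a finite batch of
# experience is gathered once and replayed repeatedly. Toy MDP is illustrative.
ALPHA, GAMMA = 0.5, 0.9

def step(s, a):
    """Toy deterministic MDP: action 1 in state 0 yields reward 1 and moves to state 1."""
    if s == 0 and a == 1:
        return 1, 1.0
    return 0, 0.0

# Collect experience once: one transition per state-action pair.
replay = []
for s in (0, 1):
    for a in (0, 1):
        s2, r = step(s, a)
        replay.append((s, a, r, s2))

# Reuse the same four samples repeatedly (off-policy Q-learning updates).
Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
for _ in range(200):
    for s, a, r, s2 in replay:
        target = r + GAMMA * max(Q[(s2, b)] for b in (0, 1))
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
```

Because the update bootstraps from the greedy value of the next state, the learned Q-values converge to the optimal policy's values even though the experience was gathered without following that policy; this is the essence of off-policy learning from a stored sample repository.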

SF.20.16.B0003 - THz Communication – Materials and Mechanisms

Michael Medley

(315) 330-4830

Rebecca Cortez

(315) 330-3150

THz communication holds promise to increase wireless data rates over short distances. This may be achieved using emerging technologies based on nanostructured materials. Potential material candidates may include carbon based nanomaterials or other plasmonic materials. This research effort focuses on understanding the materials and mechanisms which allow for the transmission of THz frequency signals. Topics of interest include theoretical examination of the physical mechanisms responsible for propagating the THz signals from source to receiver; design and fabrication strategies for component development (transmitters or receivers) capable of supporting THz frequencies; and analytical exploration of component efficiencies to minimize free space loss.

SF.20.16.B0005 - Analysis of Large Data Sets

Walter Bennette

(315) 330-4957

To support the Air Force mission, the processing and exploitation of Intelligence, Surveillance, and Reconnaissance (ISR) data is crucial. Currently, a massive amount of ISR data is collected by the Air Force to enhance situational awareness, but abilities to analyze and extract meaning from this data need to be improved. At present, motivating dataset types include text corpora, network traffic, and citizen science projects such as eBird. We are seeking individuals with expertise in performing exploratory analysis, classification or regression modeling, and knowledge extraction from these types of very large datasets. Areas of particular interest include but are not limited to:

  • Visualizations for quality assessment and knowledge extraction
  • Optimization for data mining; examples include training set selection, feature selection, and model building
  • The use of big data technologies such as Hadoop or Apache Spark through the R programming language
  • Techniques for conveying confidence in the results of data analyses to non-experts

SF.20.17.B0001 - Advanced Data Sketching, Reduction, and Analysis for Handling High Volume, Velocity, Variety and Veracity of Data to create Value

Mark Alford

(315) 330-3802

Situational awareness is necessary for accurate command-level decision making in military operations. To be most effective, analysis of incoming data must be performed as close to real time as possible, with minimal error. Large-scale data has four major attributes that must be taken into consideration: Volume, Velocity, Variety, and Veracity. Automated algorithms must seamlessly identify, evaluate, and prioritize importance based on operational parameters. A summer faculty researcher is needed to explore large-scale data analysis algorithms. The goal is to support command-level decision making by increasing real-time situational awareness. This will be done by exploring data sketching and reduction, clustering, advanced data visualization, and analysis tools. The faculty researcher will verify the parallel computing capabilities of software packages and develop Artificial Intelligence (AI) based algorithms for practical implementation of data sketching and reduction. They will also explore clustering as applied to different types of data such as text, images, video, and numerical data; research advanced data visualization techniques for increased analyst efficiency; and investigate tools for automated analytics and value visualization for increased situational awareness and more accurate command-level decision making. Research interests in this topic include:

  • Independent verification and application of XDATA software
  • Development of algorithms for coordination of software packages
  • Various techniques for clustering complex datasets
  • Visualization of complex datasets
  • AI based algorithm selection for sketching, reduction and analysis
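
As a concrete instance of data sketching, the snippet below shows a Count-Min sketch, a classic technique for estimating item frequencies in a high-volume stream using fixed, small memory. The width/depth parameters and the example stream are arbitrary choices for illustration.

```python
# Illustrative Count-Min sketch: approximate frequency counts for a data
# stream using DEPTH hash rows of WIDTH counters each. Parameters arbitrary.
import hashlib

WIDTH, DEPTH = 64, 4
table = [[0] * WIDTH for _ in range(DEPTH)]

def _bucket(item, row):
    """Derive a per-row bucket index from a cryptographic hash of the item."""
    digest = hashlib.sha256(f"{row}:{item}".encode()).hexdigest()
    return int(digest, 16) % WIDTH

def add(item):
    for row in range(DEPTH):
        table[row][_bucket(item, row)] += 1

def estimate(item):
    # Count-Min never underestimates; collisions only inflate counts,
    # so the minimum over rows is the tightest available estimate.
    return min(table[row][_bucket(item, row)] for row in range(DEPTH))

stream = ["alert"] * 100 + ["heartbeat"] * 5 + ["scan"] * 1
for item in stream:
    add(item)
```

The appeal for Volume/Velocity settings is that memory use is independent of both stream length and the number of distinct items, at the cost of a bounded overestimate.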

SF.20.17.B0002 - Multi-Domain Mission Assurance

Jason Bryant

(315) 330-7670

In an effort to support the Air Force's mission to develop Adaptive Domain Control for increasingly integrated Mission Systems, we are interested in furthering the identification of problems, and the development of solutions, for increasing Full-Spectrum Mission Assurance capabilities across joint air, space, and cyberspace operations. Modern multi-domain mission planning and execution integrates tightly with cyber and information infrastructure. To effectively direct and optimize complex operations, mission participants need timely and reliable decision support and an understanding of mission impacts that are represented and justified according to their own domain and mission context. We are interested in understanding, planning, and developing solutions for Mission Assurance that support operations requiring Mission Context across multiple domains and span both Enterprise and constrained environments (processing, data, and bandwidth). The following topic areas are of interest as we seek to provide solutions that are domain adaptive and mission adaptive, and that provide rich, critical situational awareness to Mission Commanders, Operators, and the technologies that support autonomous Mission Assurance.

  • Summary, Representation, and Translation of Multi-Domain Metrics of Mission Health - Expansive Mission Assurance requires adequate mechanisms to describe, characterize, and meaningfully translate mission success criteria, mission prioritization, information requirements, and operational dependencies from one domain to another in order to react to events, deliver them appropriately to mission participants, and thereby increase the agility, responsiveness, and resiliency of ongoing missions.
  • Multi-Domain Command and Control information Optimization - Currently, information can be disseminated and retrieved by mission participants through various means. Increasingly, mission participants will face choices of what, how, and where information will reach them or be pushed back to the Enterprise. Deciding between C2 alternatives in critical situations requires increased autonomy, deconfliction, qualitative C2 mission requirements, and policy differentials. We are seeking representations, services, configuration management, and policy approaches towards solving multi-domain multi-C2 operations.
  • Complex Event Processing for Multi-Domain Missions - The ability to better support future missions will require increased responsiveness to cyber, information, and multi-domain mission dynamics. We are seeking mission assurance solutions that process information event logs, kinetic operation event data, and cyber situational awareness in order to take data-driven approaches to validating threats across the full-spectrum of mission awareness, and justify decisions for posturing, resource and information management, and operational adjustments for mission assurance.
  • Machine Learning for Mission Support - Decreasing the cost and time resource burdens of mission-supporting technologies is critical to supporting transition to relevant domains and decreasing solution rigidity. This requires advanced approaches to zero-shot learning for understanding mission processes, algorithms to align active missions with disparate archival and streaming information resources, analysis of Mission SA to determine cross-domain applicability, and autonomous recognition of mission-essential functions and mission-relevant events. Additionally, ontologies and semantic algorithms that can provide mission context, critical mission analytics relationships, mission assurance provenance and response justifications, as well as mission authority de-confliction for intra-mission processes and role-based operational decisions, are topics that would support advanced capabilities for mission monitoring, awareness, and assurance decisions.

SF.20.17.B0003 - Risk-Aware, Distributed Intelligence, Surveillance, and Reconnaissance (ISR)

Jeffery Hudack

(315) 330-2963

Future missions will need to achieve information gathering and processing in a distributed environment while being resilient against dynamic and adverse conditions. Research areas of interest include distributed control, risk-aware planning, and federated information management. Distributed control considers methods of coordination between multiple systems that are not governed by a central authority. Risk-aware planning is a form of robust optimization that accounts for the inherent risks in the environment, both natural and adversarial. Federated information management includes the transport, processing, and fusion of data through the interaction of multiple dispersed devices. These research areas are of growing interest as applied to command and control, man-machine interactions, and the maturation of unmanned air vehicles. Evaluation of distributed methods will be performed using empirical simulation, quantitative dynamic models, and/or verification via formal proof.
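
One common robust-optimization surrogate for risk-aware planning is to score each candidate plan by its expected cost plus a penalty on outcome variance. The sketch below is a toy illustration; the candidate plans and their outcome distributions are invented.

```python
# Hedged sketch of risk-aware plan selection: mean cost plus a variance
# penalty (a standard mean-variance surrogate). All data here is invented.
def mean(xs):
    return sum(xs) / len(xs)

def risk_adjusted_cost(outcomes, risk_weight=1.0):
    m = mean(outcomes)
    variance = mean([(x - m) ** 2 for x in outcomes])
    return m + risk_weight * variance

plans = {
    "direct_route": [10, 10, 40],   # cheap on average, occasionally very bad
    "safe_route": [18, 19, 20],     # slightly costlier, highly predictable
}
best = min(plans, key=lambda p: risk_adjusted_cost(plans[p]))
```

Tuning `risk_weight` trades off expected performance against robustness to adverse outcomes, which is exactly the tension between natural and adversarial risk noted above.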

SF.20.17.B0004 - Web Browser Extension Development and Data Extraction

Michael Manno

(315) 330-7517

The primary objective of this effort is to extract information from documents in real time, without the need to install additional software packages, perform specialized development, or train agents for each source.

Seeking data from multiple documents is a manual, time-consuming, undocumented process which needs to be repeated every time an update or change to that data is requested. Automating this process is a challenge because the documents routinely change; sometimes the mere act of refreshing a web page changes the document as the ads cycle. Such changes defeat most of today's web scraping techniques. The lack of data from failed updates during the extraction process creates many problems when attempting to update data located on open source web sites and documents. Each time a document changes, the method responsible for extracting data from that document also needs to change. This diverts time away from an analyst, who begins spending more time managing data as opposed to performing the intended analysis. Web scraping, or extracting data from a document, typically requires training or expert analysis of each source before data can be parsed from it. This means documents must first be identified before a script or agent can be written to extract data from them, so a user cannot discover a document and immediately begin parsing data from it. Services that provide access to data, such as RSS feeds, Web Services, and APIs, are useful but are not necessarily what is needed by the requestor. For example, the Top Story from a news publisher may be available as an RSS feed, whereas the birth rate of a country may not be.
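
The fragility problem described above can be sketched as a chain of extraction rules tried in priority order, falling back when a page change breaks the primary rule. The "rules" here are plain functions over a text document and the sample pages are invented; the Atlas extension would of course operate on the browser DOM rather than raw strings.

```python
# Illustrative fallback extraction: try the most specific rule first, then
# progressively looser ones as layout changes break earlier rules.
def extract_with_fallbacks(document, rules):
    for name, rule in rules:
        try:
            value = rule(document)
            if value is not None:
                return name, value
        except Exception:
            continue  # a layout change broke this rule; try the next one
    return None, None

# Two hypothetical versions of the same page, before and after a redesign.
page_v1 = "Population: 331,000,000"
page_v2 = "Est. population (2020): 331,000,000"

rules = [
    ("exact_label", lambda d: d.split("Population: ")[1] if "Population: " in d else None),
    ("loose_match", lambda d: d.split(": ")[-1] if "population" in d.lower() else None),
]
```

When the redesigned page defeats the exact rule, the looser rule still recovers the value, at the cost of lower precision; a practical system would log which rule fired so an analyst can tell how trustworthy each extraction is.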

This assignment will focus heavily on enhancing a web browser extension, called Atlas, that leverages Air Force Research Laboratory (AFRL) in-house developed functionality and web browser functionality. The extension will be used for routine extraction of data elements from open source web pages/documents and will be developed for the Firefox web browser. In addition to web browser extension development, this assignment will also involve adding functionality such as visualization enhancements, search and transposition, crawl, and a process for identifying similar data. Consideration will also be given to expanding to additional web browsers such as Internet Explorer.

SF.20.17.B0005 - Cyber Threat Avoidance and Mining Process

Laurent Njilla

(315) 330-4939

The security landscape is becoming more sophisticated with the emergence of new communication and information technology platforms such as mobile computing, cloud computing, online social networks, cyber-physical systems, and the Internet of Things (IoT). What was once an allegedly unmotivated form of vandalism on the Internet has become a diverse ecosystem of cyber-crime, where providers and consumers come together to achieve various end-goals and utilities. The persistence, complexity, and capabilities of today's adversaries are limitless, and their threat affects not only individuals and organizations, but also nations.

Traditional firewalls are not sufficient to ensure the security of computer networks. Due to the interconnection of numerous networks, cyberspace is becoming vulnerable to cyber attacks, and firewalls must be complemented with other protective mechanisms. Amongst these mechanisms, intrusion detection and the profiling of attackers are the most widely used, despite their limitations. Data mining techniques such as deep neural networks, clustering, and machine learning can also be applied. Process mining techniques combined with game-theoretic concepts can bridge the gap for discovering security breaches and learning an attacker's modus operandi.

Moreover, prior research has proposed sharing cyber threat information as a means to prevent future cyber attacks and revenue loss by finding and repairing vulnerabilities proactively. Information sharing also minimizes the cost of investment in developing countermeasures to cyber attacks. This research opportunity is interested in applying new techniques such as category theory, channel theory, and information flow to tackle cyber threat information sharing and side-channel attacks.

SF.20.17.B0006 - Extracting Knowledge from Text

Aleksey Panasyuk

(315) 330-3976

The vision is to go from working with documents such as PDFs, webpages, etc. to a knowledge representation that is continuously updated from new documents. In this vision the user primarily works with a knowledge graph in order to understand the overall message the documents are reporting on. The user needs the ability to go from facts to original documents and to understand similarities and differences among the multitude of documents and how each contributes to the overall scenario being portrayed. The challenge is the different types of information that may be of interest to the user, including but not limited to: (1) people and groups, (2) events (who, what), (3) geo-spatio-temporal information (where, when), (4) causal explanations (why, how), (5) facilities and equipment, (6) modality and beliefs, (7) anomalies, novelty, and emerging trends, (8) inter-relationships, entailments, and coreference of entities and events, (9) disfluencies/disjointedness, and (10) dynamic, perishable, changing situations. Current state-of-the-art knowledge representation systems suffer from varied prose styles and document types, such that extraction accuracy is insufficient for sifting through a very large, constantly growing store of knowledge for important insights.
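
The "facts to original documents" requirement amounts to attaching provenance to every extracted fact. A minimal sketch of such a store follows; the entities, relations, and document identifiers are invented examples, and a real system would extract the triples automatically rather than add them by hand.

```python
# Hypothetical knowledge store linking each (subject, relation, object) fact
# back to the set of source documents that support it.
from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        # fact triple -> set of source document ids (provenance)
        self.provenance = defaultdict(set)

    def add_fact(self, subject, relation, obj, doc_id):
        self.provenance[(subject, relation, obj)].add(doc_id)

    def sources(self, subject, relation, obj):
        """Navigate from a fact back to the documents that reported it."""
        return sorted(self.provenance[(subject, relation, obj)])

kg = KnowledgeGraph()
kg.add_fact("GroupA", "located_in", "RegionX", "doc_001")
kg.add_fact("GroupA", "located_in", "RegionX", "doc_007")  # corroborating report
kg.add_fact("GroupA", "operates", "FacilityY", "doc_007")
```

Counting a fact's supporting documents also gives a crude corroboration signal, one way to surface the similarities and differences among documents mentioned above.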

SF.20.17.B0007 - Data Driven Model Discovery for Dynamical Systems

Joseph Raquepas

(315) 330-4263

The discovery and extraction of dynamical systems models from data is fundamental to all science and engineering disciplines, and the recent explosion in both quantity and quality of available data demands new mathematical methods. While standard statistical and machine learning approaches are capable of addressing static model discovery, they do not capture interdependent dynamic interactions which evolve over time or the underlying principles which govern that evolution. The goal of this effort is to research methods to discover complex time-evolving systems from data. Key aspects include discovering the governing systems of equations underlying a dynamical system from large data sets and discovering dynamic causal relationships within data. In addition to model discovery, understanding relevant model dimensionality and dimension reduction methods is crucial. Approaches of interest include but are not limited to: model discovery based on Takens' theorem, learning library approaches, multiresolution dynamic mode decomposition, and Koopman manifold reductions.
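
A drastically simplified stand-in for the library-learning approaches above: recover the coefficient of a known-form scalar system x' = a*x from sampled trajectory data via finite differences and least squares. The true coefficient and step size are assumed example values.

```python
# Toy data-driven model discovery: simulate x' = a*x, then recover a from
# the data alone (finite-difference derivatives + least-squares fit).
a_true, dt = -0.5, 0.01

# Simulate the system with Euler steps to generate "measured" data.
xs = [1.0]
for _ in range(1000):
    xs.append(xs[-1] + dt * a_true * xs[-1])

# Estimate derivatives from the data, then fit x' against x.
dxdt = [(xs[i + 1] - xs[i]) / dt for i in range(len(xs) - 1)]
a_hat = sum(x * dx for x, dx in zip(xs, dxdt)) / sum(x * x for x in xs[:-1])
```

Sparse-regression methods generalize this single-term fit to a whole library of candidate terms (polynomials, trigonometric functions, etc.), keeping only the few with nonzero coefficients; noisy derivatives and library choice are where the real research difficulty lies.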

SF.20.17.B0008 - Uncertainty Propagation for Space Situational Awareness

Joseph Raquepas

(315) 330-4263

One of the significant technical challenges in space situational awareness is the accurate and consistent propagation of uncertainty for a large number of space objects governed by highly nonlinear dynamics with stochastic excitation and uncertain initial conditions. Traditional uncertainty propagation methods, which rely on linearizing the dynamics about a nominal trajectory, often break down under a high degree of uncertainty or on long time scales. In addition, the data uncertainty is usually poorly characterized, or the data may be sparse or incomplete. Many recent developments which attempt to address these issues, such as unscented Kalman filters, Gaussian sum filters, and polynomial chaos filters, tend to be ad hoc approaches with limited foundational rigor. The objective of this topic is to research accurate, computationally efficient, and rigorously validated methods for uncertainty propagation for dynamical systems which address the nonlinear nature of the underlying dynamics and the high degree of uncertainty and lack of completeness in the data. Of interest are approaches which leverage methods of modern dynamical systems theory, the theory of stochastic differential equations, and unique methods for numerically approximating solutions to the Fokker-Planck equation.
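
The baseline against which such filters are usually judged is brute-force Monte Carlo: push an ensemble of samples through the full nonlinear dynamics and measure the resulting spread. The one-dimensional toy dynamics and noise levels below are illustrative assumptions, not orbital mechanics.

```python
# Minimal Monte Carlo sketch of uncertainty propagation through a nonlinear
# map. Linearized methods would mispredict the spread here because the toy
# dynamics push probability mass away from the nominal point.
import math
import random

random.seed(1)

def propagate(x):
    return x + 0.1 * math.sin(3.0 * x)  # toy nonlinear one-step dynamics

N = 10_000
ensemble = [random.gauss(0.0, 0.5) for _ in range(N)]  # uncertain initial state
for _ in range(50):                                     # propagate 50 steps
    ensemble = [propagate(x) for x in ensemble]

mean = sum(ensemble) / N
var = sum((x - mean) ** 2 for x in ensemble) / (N - 1)
```

Because the origin is an unstable fixed point of these dynamics, the initially Gaussian ensemble splits into a multimodal distribution, exactly the regime where single-Gaussian linearized propagation fails and Gaussian-sum or polynomial chaos representations become attractive.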

SF.20.17.B0009 - Development of Space Operations Courses of Action Under Constrained Resources using Reachability Sets and Computational Topology

Tyler Diggans (315) 330-2102

Dale Richards (315) 330-3014

Johnathan Coker (315) 330-3204

As the command and control of space assets becomes more complex, and space itself becomes more congested, the development of feasible and appropriate courses of action becomes more complicated and constrained. The generation and assessment of courses of action need to be more robust, adaptive to dynamic operational environments, and responsive to previously unseen and potentially unique scenarios. Management and control of sensing and tracking systems, as well as actual control of operational assets, must become more agile and responsive to changes in available information and dynamically varying temporal constraints on the decision-making process.

An important aspect of this process is the use of models to determine the reachability set of a satellite in orbit, i.e., the set of all possible locations and orbital trajectories at a later time, given some information about the satellite's energetic capabilities. Current research in this area involves computational topology and adaptive representations of high-dimensional simplicial complexes (SC). Propagating this set through time and space can involve algorithms of exponential computational complexity, with cost growing with each computational iteration since the last observation. This can be mitigated by implementing heuristic merging criteria for vertices, using an adaptive dynamic representation of the SC, or other yet-to-be-explored novel approaches. Because such an algorithm would be used continuously in support of real-time operations, with subsequent real-time monitoring of results, optimization of both computational and time resources is critical.

In this and other space command and control tasks, availability of information is often limited and is dependent on time as well as other physical (non-mathematical) constraints. This work will involve the application of the following research areas:

  • Orbital Mechanics and Reachability Sets
  • Computational Topology
  • Constrained Algorithmic Optimization
  • Statistics and Machine Learning
  • Abstraction of space C2 systems to physical and behavioral models
  • Management of electro-magnetic / optical parameters – both sensing and operational communications

SF.20.17.B0010 - Ultra-broadband Networking: mm-Waves, THz Band and Beyond

Ngwe Thawdar

(315) 330-2951

Today’s increasing demand for higher data rates and congestion in the conventional RF spectrum have motivated research and development in higher frequency bands such as the millimeter-wave band, the terahertz band, and beyond. New developments in device and physical layer technologies promise to relieve the overcrowded spectrum at lower frequencies as well as enable new high-bandwidth applications that are not feasible with current wireless technologies. The focus of this research is to develop novel networking solutions that exploit the full potential of the unprecedentedly large bandwidth offered by recent developments in high-frequency bands. Traditionally, wireless networks have been designed with available bandwidth as the major constraint. We are interested in new and novel protocols at higher layers that do not hold the same assumption but address the challenges stemming from the peculiarities of channel physics at high frequencies. For example, the very high path loss caused by atmospheric and molecular absorption at these frequencies effectively shortens the transmission range. This, in turn, calls for the deployment of highly directional antennas and massive-MIMO arrays as well as relaying and multi-hop communication schemes. In this new paradigm, our research areas of interest include but are not limited to:

  • Link layer protocols where nodes do not need to aggressively contend for the channel but have to consider challenges stemming from channel characteristics and use of directional antennas.
  • Transport and network layer protocols that can support very high data arrival rates without data loss or queueing issues.
  • Topology control of ultra-dense networks consisting of active and passive relay nodes and nodes using directional and massive-MIMO antenna arrays.
  • Synchronization and medium access strategies that consider the effect of very high-speed data rates (Tbps or at least multi-Gbps) in high-speed airborne networks.
  • Compatibility with legacy frequency band access to provide spectrum diversity to the system.
  • Cross-layer protocols that take into account the challenges and opportunities at higher frequency bands.


Dr. Andy Noga
Ms. Elizabeth Kelly
Information Institute
Telephone: (315) 330-4775
E-mail: rrs.iiweb@us.af.mil