Establishing Choices: Developing a Portfolio of Space System Options
Aerospace assists its customers in developing space system options to meet a variety of mission needs. To meet all mission needs requires a collection or portfolio of systems. The company has identified best practices for how to evaluate and manage these portfolios of projects as well as individual projects.
By Mark Maier
Some space system program offices are responsible for a single system development effort and are tasked and funded to build and deliver a specific system. Other program offices hold a portfolio of projects and are responsible for conceiving, approving, and developing multiple systems. The Aerospace Corporation assists these program offices in their efforts, whether they hold a single system or a portfolio of systems. The challenges in conceiving and managing a portfolio of systems are distinct from those involved in a single system.
Organizations that develop space systems consider a variety of concepts to meet many different mission needs. Many more of these system options will be started than are finished, and only a few will advance into serious development. Even fewer of these will become flight systems.
Aerospace advises its customers on the structure and content of portfolios of systems ranging from early concept development to final flight systems. The company uses a variety of techniques for understanding, modeling, and building these portfolios.
Portfolio, Collaborative, and Families of Systems
When a group of systems is considered together as a whole, it is important to distinguish which category the group falls into. Three common types of groups are portfolios, collaborations, and families. A portfolio of systems is a group of systems managed collectively against an overall budget to fulfill multiple missions. The individual systems within a portfolio may be unrelated, except by being members of the portfolio, and are able to operate alone.
In a collaborative system (or system of systems), the systems interact with each other to produce results that none can achieve alone, while they retain managerial and operational independence. Operational independence means that if one system becomes disconnected from the collective, it will continue to fulfill useful purposes on its own. Managerial independence means that each system has its own management chain that seeks the best for that system, not necessarily for the collective.
A family of systems is jointly designed, developed, and manufactured around a backbone of shared elements (usually parts or software). The goal of the collective design and production is to realize economies of scale while accommodating customization for individual stakeholders. This is a standard commercial approach to balancing diversity of demand against economies of scale.
The Four Elements of a Portfolio
A portfolio of systems is managed through four elements: the project or system criteria, the portfolio criteria, the lifecycle model, and the business process. Organizations embody their mission by how they choose the four elements. The choices that make sense for a civilian science organization may be different from those of a military organization.
Project or System Criteria
The project or system criteria are the measures of what makes a good project or system. They are measures of the appropriateness of individual concepts for inclusion in a portfolio of systems. The criteria come from the overall mission of the portfolio-holding organization and typically require a deep understanding of that mission.
While specific project criteria are determined on a case-by-case basis, many generic criteria will appear repeatedly. The individual project criteria include value, cost and resources, domain and strategic fit, and risk.
Value criteria are at the core of space system development considerations. They measure what stakeholders are willing to pay for and what the system will deliver. These criteria are typically drawn from economic measures, top-down measures of mission capability, bottom-up measures of stakeholder satisfaction or engagement, problem-centric measures of solution completeness, and technology advancement.
Cost and resource criteria measure the expenditure of resources (e.g., money, people, and time).
Domain and strategic fit criteria measure the appropriateness of the system to the organization’s overall responsibilities.
Risk criteria measure the uncertainty an organization is willing to assume and its favored position along a risk/reward trade. Some organizations will favor sure things, even if of modest value, while others are tasked to take more risk with the potential for high-value reward.
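The four generic criteria lend themselves to a simple weighted-scoring sketch. The criterion names, weights, and 0-10 rating scale below are hypothetical illustrations, not a prescribed method; a real evaluation would derive both the criteria and their weights from the organization's mission.

```python
# Hypothetical weighted scoring of a single system concept against the four
# generic project criteria: value, cost/resources, strategic fit, and risk.
# Weights and the 0-10 rating scale are illustrative assumptions.

CRITERIA_WEIGHTS = {
    "value": 0.4,
    "cost_and_resources": 0.2,
    "strategic_fit": 0.2,
    "risk": 0.2,
}

def score_concept(ratings):
    """Weighted sum of 0-10 ratings; assumes every criterion is rated."""
    return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)

concept = {"value": 8, "cost_and_resources": 5, "strategic_fit": 7, "risk": 6}
print(score_concept(concept))  # 6.8 on the 0-10 scale
```

Even a crude scheme like this forces the portfolio holder to make its priorities explicit, which is most of the value of defining the criteria in the first place.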
Portfolio Criteria
A system or project’s evaluation criteria flow from the given organization’s strategic identity. This is a familiar concept covered in many systems engineering textbooks and is built into the acquisition process. A somewhat less familiar idea is that portfolios of systems also have evaluation criteria, and that the portfolio’s criteria are different from those of the individual elements. For a portfolio of systems, the criteria are typically drawn from:
- Mission coverage. Do one or more systems in the portfolio address each of the assigned missions?
- Redundancy. Is there multiple coverage of the most important missions to reduce risk?
- Lack of redundancy. Is there only one system covering each mission area because of direction to eliminate overlaps?
- Diversity. Are the systems diverse with respect to suppliers, users, technologies, and funders?
- Risk profile. Are the systems diverse or concentrated with respect to risk? Some portfolio holders will want a mix of high- and low-risk projects, while others may want a nearly uniform risk profile.
- Risk correlation. Are the risks from project to project decorrelated?
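These portfolio-level checks can be sketched in a few lines of code. The data model below (systems tagged with a mission set and a coarse risk level) and all names are assumptions for illustration only:

```python
# Minimal sketch (all names hypothetical) of three portfolio-level criteria
# listed above: mission coverage, redundancy, and the portfolio's risk mix.

from dataclasses import dataclass

@dataclass
class System:
    name: str
    missions: set
    risk: str  # "low", "medium", or "high"

def coverage_gaps(portfolio, assigned_missions):
    """Assigned missions not addressed by any system in the portfolio."""
    covered = set()
    for s in portfolio:
        covered |= s.missions
    return assigned_missions - covered

def redundancy(portfolio, mission):
    """How many systems cover a mission (multiple coverage reduces risk)."""
    return sum(1 for s in portfolio if mission in s.missions)

def risk_mix(portfolio):
    """Count of systems at each risk level, to check diversity vs. concentration."""
    mix = {"low": 0, "medium": 0, "high": 0}
    for s in portfolio:
        mix[s.risk] += 1
    return mix

portfolio = [
    System("EO-1", {"imaging"}, "low"),
    System("SAR-1", {"imaging", "mapping"}, "high"),
]
print(coverage_gaps(portfolio, {"imaging", "mapping", "weather"}))  # {'weather'}
```

A real portfolio evaluation would have to weigh these checks against one another (for example, deliberate redundancy versus directed elimination of overlaps); the sketch only surfaces the raw measures.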
Lifecycle Model
Many portfolios of systems consist of systems across a lifecycle, from very immature to operationally ready. In many cases, a basic objective of portfolio management is to shepherd system concepts from their earliest phases to final delivery. In these cases, part of the portfolio’s architecture is what lifecycle model a system will traverse during its time in the portfolio.
Systems can be built on a variety of program templates, including the build-it-all-at-once waterfall, the functional increment spiral, the breadboard/brassboard/flight spiral, the protoflight spiral, and the concept development staircase spiral. In a portfolio whose intent is to shepherd concepts from low maturity to full readiness, the usual choice will be the concept development staircase spiral. The staircase involves building a system in a sequence of studies and prototypes, with each step increasing the maturity of development, decreasing the risks, and allowing for a down-select of inferior concepts. The system moves up a series of steps involving increased resources, with the first steps being studies and the final step being a full-scale development effort.
There is no universally accepted formalization of the steps in the concept development staircase, although some organizations have established formal staircases. Some useful guidelines for how to choose and manage the steps in the concept development staircase are:
- There should not be too many or too few steps. Typically, there are three to seven steps from a concept’s entry into consideration to final deployment.
- The increase in costs between steps should never be more than a factor of ten.
- There should be explicit criteria for moving between steps, resulting in a written document. How formal the document is should be governed by the size of the commitments and the consequences of failure. Regardless of the rigor, though, the results should be documented.
- The steps should flow directly from the types of risks the organization is willing to undertake and those it wants to avoid (e.g., technology risk versus cost risk versus user acceptance risk).
- No organization can avoid all risks (or it will never do anything new), but it should explicitly understand what types of risks it is willing to accept and what types it wants to avoid.
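The two quantitative guidelines above, three to seven steps and no more than a factor-of-ten cost increase between consecutive steps, can be expressed as a simple validation check. The step costs below are hypothetical:

```python
# Illustrative check of the staircase guidelines: three to seven steps, and no
# more than a factor-of-ten cost increase between consecutive steps.
# Step costs are hypothetical.

def validate_staircase(step_costs):
    """Return a list of guideline violations for an ordered list of step costs."""
    problems = []
    if not 3 <= len(step_costs) <= 7:
        problems.append(f"{len(step_costs)} steps; guideline is 3 to 7")
    for i in range(1, len(step_costs)):
        ratio = step_costs[i] / step_costs[i - 1]
        if ratio > 10:
            problems.append(f"step {i} costs {ratio:.0f}x step {i - 1}; limit is 10x")
    return problems

# A staircase from internal study to full-scale development (costs in $K).
staircase = [100, 800, 5_000, 40_000, 300_000]
print(validate_staircase(staircase))  # [] -- every jump is within 10x
```

The factor-of-ten limit is what keeps each down-select decision commensurate with the evidence gathered at the previous step: no single commitment outruns what the organization has learned.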
Business Process
The business process determines how the portfolio manager obtains resources and moves projects along the lifecycle, and in particular how projects are selected for entry into or removal from the portfolio. The two basic business models are the budget allocation model and the entrepreneurship/return-on-investment model.
In the budget allocation model, the portfolio manager has a fixed, annual budget, and the manager’s responsibility is to allocate it to projects within the portfolio in a way that optimizes the portfolio criteria. A new project can be included in the portfolio only if an older project is removed or spun out. This is similar to the financial or business manager whose resource base is fixed and who works to maximize returns. Efficiency and staying below the budget ceiling drive evaluations. A highly attractive project that exceeds the budget ceiling cannot be considered. This business model will also favor projects that do not have large cost deltas from year to year and are very predictable. The process itself is inward looking and focused on optimization.
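Under stated assumptions (a known cost and a single scalar value per project), the budget allocation model reduces to a selection problem: choose the subset of candidates that maximizes portfolio value without exceeding the fixed ceiling. The names and figures below are hypothetical, and the exhaustive search is a sketch rather than a recommended optimizer:

```python
# Hypothetical sketch of the budget allocation model as a 0/1 selection
# problem. Exhaustive search over subsets; fine for small portfolios.

from itertools import combinations

def best_allocation(projects, budget):
    """projects: list of (name, cost, value). Returns the names of the
    highest-value subset whose total cost stays within the budget."""
    best, best_value = (), 0
    for r in range(1, len(projects) + 1):
        for subset in combinations(projects, r):
            cost = sum(p[1] for p in subset)
            value = sum(p[2] for p in subset)
            if cost <= budget and value > best_value:
                best, best_value = subset, value
    return [p[0] for p in best]

candidates = [("A", 40, 10), ("B", 30, 9), ("C", 50, 12), ("D", 20, 5)]
print(best_allocation(candidates, budget=100))  # ['B', 'C', 'D']
```

Note what the model structurally cannot do: project "A" in the example is individually attractive, but it is displaced because the ceiling is fixed, exactly the behavior the article attributes to this business model.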
The entrepreneurship/return-on-investment model assumes the portfolio manager has very little fixed budget and wide authority to seek resources from outside stakeholders. Adding a new project to the portfolio does not come at the expense of an older project as long as a new funding source is identified. The manager will typically have some budget, otherwise it would be difficult to develop projects to the point that they could be presented, but the emphasis is on identifying and engaging stakeholders, not on allocation efficiency. The cost efficiency and absolute size of a project are much less important in this model. If the return-on-investment of a project is attractive to an outside funder, it can be pursued, regardless of cost.
In this case, the portfolio manager is either effectively an entrepreneur within a bureaucracy or a manager of entrepreneurs. If the portfolio manager is a manager of entrepreneurs, the individual project leads must be fully empowered to act as entrepreneurs. If it is only the portfolio manager who can sell projects to outside stakeholders, that person must fully embrace the entrepreneurial role.
Case Study: Operational Demonstration Missions
In this case study, a space systems organization built a diverse and successful portfolio of systems using the aforementioned methods. The engineering and management teams responsible for developing the options took a portfolio approach and determined the first three elements: project criteria, portfolio criteria, and lifecycle model. The business process gradually emerged.
The portfolio’s broad mission was to deliver innovative intelligence through remote sensing more rapidly and with a better operational profile than what had been done before. However, broad mission or vision statements provide only basic guidance to actually selecting projects for the portfolio. The meat of understanding the mission comes from defining the portfolio with respect to the four elements. In examining this case study, it is important to realize that the criteria are specific to this portfolio. Other organizations, having different missions, may have chosen different criteria, development sequences, and business processes, even though their choices would fit within the same four categories.
Aerospace was asked to help this program office develop a new portfolio. The office wanted to identify and develop a series of projects intended to end up in operational demonstrations with collection capabilities. The end results were to be operationally useful demonstrations, not just technology demonstrations. There was no clear definition of what areas should be covered, what technologies were of interest, or which stakeholders should be engaged. There was an initial budget, but the office was expected to find groups of influential stakeholders who would generate new money for the effort. The office’s systems architecture and engineering group was thus tasked with building the portfolio of projects it would pursue. The challenge was how to build the portfolio and manage its evolution over time.
As the project progressed, the portfolio was managed by a small systems architecture and engineering team who advised the office director. The team consisted of no more than ten people who were responsible for modeling the portfolio and maintaining a continuous catalog of system concepts and evaluations. The initial development cycles were typically conducted by the systems team and then spun out to separate project groups when concept maturity was sufficient and support had been obtained.
The systems architecture and engineering team settled on five primary project criteria for the portfolio of systems. The criteria were:
- Intelligence value delivered. This was an assessment of what new intelligence value would be delivered and what outstanding intelligence problem would be answered if the system concept worked as well as could be reasonably imagined. The program office deliberately took an optimistic estimate for performance, but then separately evaluated risk of delivery in other criteria. The intelligence value was established against a modest subset of standing problems.
- Time to operation. Projects were selected that fit within a specific timeframe, based on the organization’s overall mission.
- Cost. As with time to operation, the organization had a minimum and maximum cost objective. The organization’s portfolio targeted total program cost within a flexible budget as a matter of overall placement within larger organizations.
- Security profile. An ideal project for the group required a particular security profile of accesses and authorities.
- Technical feasibility. The ideal project for the organization was technically neither low risk nor very high risk. A specific level of technical risk was established that was represented by the overall maturity of the technology.
The intelligence value criterion deserves some additional explanation. Value assessments are typically difficult to define and subjective, or they devolve into business-as-usual assessments of collection against standing requirements. Aerospace built a simple, effective model called “relevance, impact, and innovation” to better assess candidates within these limitations.
Relevance measures the breadth of value achieved. (For how many of the designated problems and stakeholders was value delivered?) Impact measures how strongly the user’s processes were affected. (Does the new information change the stakeholder’s assessments and operations, or only make a marginal impact?) Innovation measures the newness of the information. (Does the information merely provide a larger volume of something that is already available, or is it a different kind of measurement than anything already provided?)
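The article does not specify how the three measures were combined, so the sketch below assumes a 0-3 rating scale for each and a multiplicative combination; both are illustrative assumptions. Multiplication penalizes a concept that is weak on any single measure, such as a highly innovative sensor that is relevant to almost no standing problems:

```python
# Hedged sketch of the "relevance, impact, and innovation" value model.
# The 0-3 scales and the multiplicative combination are assumptions;
# the article does not state how the three measures were combined.

def intelligence_value(relevance, impact, innovation):
    """Each input rated 0-3; returns a combined 0-27 score.
    A zero on any measure zeroes the whole score."""
    for m in (relevance, impact, innovation):
        if not 0 <= m <= 3:
            raise ValueError("ratings must be between 0 and 3")
    return relevance * impact * innovation

print(intelligence_value(relevance=3, impact=2, innovation=2))  # 12
```

An additive combination would instead let strength on one measure compensate for weakness on another; which behavior is right depends on the portfolio holder's mission.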
The program office also considered two other factors when rating these projects: platform and partnerships. Platform refers to the physical means by which a sensor is deployed and maintained in the operational area (e.g., a satellite or airplane). When it came to platform, the system concept had to eventually be compatible with the organization’s particular platforms. The office was willing to consider concepts that were not compatible with the standard platforms initially, but not if they had no hope of eventually becoming compatible with the organization’s overall platform mission. The office also preferred projects that came with a partner willing to participate, fund, and possibly jointly operate, at least in the early phases of the roll out.
It is important to consider how the lifecycle model was developed and used before examining how the portfolio criteria were constructed and used for this case study. Because of the project criteria, many of the highest rated system concepts had low technical feasibility, or had not been evaluated carefully enough to know how feasible they were. The lifecycle model had to support a spiral approach in which commitments were small in the beginning and increased only when the evaluations justified doing so. This process was formalized into several steps/levels:
- Unstudied but of interest. The systems architecture and engineering teams met approximately every quarter to revise the list of concepts. To get on this list, someone on the systems team had to propose a concept and define it well enough so that it could be rated. The rating against the project criteria was done solely on the basis of information presented to the group without any formal study.
- Internally studied. The first level of study was an internal study. This meant management had approved that a systems team member spend up to a few months reviewing external information and making calculations to better understand the concept. At any given time, there were several internal studies going on. An internal study resulted in the production of a document, an analysis, and a presentation of the concept to the systems team. This often happened several times as questions were raised and different approaches were proposed. At the end of the process, the concept was rerated and placed on the internally studied list. Concepts that were internally studied could be advanced to a formal, funded external study.
- Externally studied. Leadership reviewed the internally studied list and selected those highly rated concepts for more study, as well as making them part of the budget allocation process. An external study involved a formal study name and outside contractors. A major goal was to gain a level of understanding that allowed a formal acquisition to occur if warranted. Once the external study was concluded, the concept was rerated and maintained on the externally studied list.
- Technology development projects. A fourth list included those concepts that had been externally studied and selected as development projects, but were not yet in true full-scale development. A concept was added to this list if the external study concluded it had high value and fit the organization’s business model, but some element of the technology’s feasibility was in question, such that advancing to full development was not advised. Instead, support was gained to work on the technology issue, typically with an intermediate goal of a demonstration on a partner’s platform.
- In development. The final list included those concepts that were funded to be developed into operational systems and were currently in development.
A major goal of the systems team was to ensure that the externally studied list had three to five well understood, high-value concept candidates that were ready to be advanced to a development project. The business process was largely bureaucratic entrepreneurship, and the organizational leadership presented concepts to obtain funding. The system team’s primary goal was to ensure that the leadership always had something to advocate and that anything selected from the list would result in a successful, valuable system.
In theory, a concept moved up the development staircase step-by-step. In some cases, this happened, while in other cases, the lifecycle process was abbreviated.
Given the lifecycle model’s structure, there was in practice a portfolio with different criteria at each level. While the systems architecture and engineering team came up with the portfolio criteria, it was more useful over time to see how the leadership’s decisions on what to support, or not, revealed its real portfolio preferences. This fed back into the lifecycle process by helping to formulate a leading candidate list at each level that better matched the leadership’s revealed preferences.
Probably the most important lesson from the revealed preferences was the leadership’s desire for a mix of feasibility and risk levels, and for platform focus. The leadership did not want a portfolio of either all high-risk or all low-risk technologies, but preferred a mixture. There was also a clear disjunction between what some leaders said about their preferences on platform flexibility and how concepts were actually picked. The expressed preference was not to worry about platform: if a concept worked on a partner’s platform or a local platform, it was acceptable. In practice, concepts that did not work on the local platforms did very poorly in leadership selection, and it was not productive to pursue them.
The lifecycle model discussion has already encompassed important elements of the business process for this case study. The first three steps were funded by the organization’s standing budget and additional budget allocation. Steps four and five were sometimes internally funded, but were more often sold externally on a return-on-investment basis. The organization’s director was responsible for selling concepts at steps four and five. The systems architecture and engineering team supported the director with approximately five externally studied concepts to sell. The team also refreshed the lists regularly as concepts were sold and became full-fledged projects; those that failed to sell were dropped from the active portion of the lists. Entry and exit happened in regular reviews. Successful concepts were advanced up the lists, while concepts in which study at a given level provided unfavorable information were mothballed.
A concept called sensor A went through the portfolio development process in textbook fashion. The original idea came from an external partner, but it worked its way through the systems architecture and engineering team in staged internal to external studies. The leadership selected it early on for advancement to a full program. However, the original concept was for a very narrowly focused mission solely using sensor A. After advancing to a full-fledged program, the leadership combined it with an additional, loosely related technology demonstration. This slowed the time-to-operation (a negative from the original rating), but brought in a politically important partner. The eventual system was successful, has led to follow-ons, and has influenced the architectures of future systems.
Another concept called sensor B went through much the same process, but ended differently. As the concept went through the external study step, it became clear that the base technology was not mature enough for implementation on the preferred platform. There was an alternative platform in a partner organization that was more suitable, but basic technology development was still required. This concept ended up at the fourth level in the lifecycle model, which was focused on technology development. It did not advance beyond that, partly because of how difficult the technology turned out to be.
Two other sensors skipped or abbreviated parts of the development process. Sensor C was immediately advanced to a full study based on the partner’s enthusiasm and support. Fortunately, a full internal study was conducted before committing significant funds. This proved wise when the partner’s claims did not hold up to quantitative scrutiny. With sensor D, the external partner already had a very significant and long-standing technology demonstration program. The program office ended up carrying out an internal study for much longer than usual, tracking the technology demonstration program for more than a year. Much like sensor B, the office concluded that the concept was sound and potentially of high value, but much better suited to platforms other than those of interest to the organization. Having learned some lessons from the sensor B experience, the office recommended that another partner pursue the sensor D concept. The partner continued to follow it, although an inability to get sufficient resources to address basic technology issues has left the concept on the back burner.