Team,
In my last DRRS-N update to you in November, I emphasized the importance of timely and accurate reporting at the unit level. I also talked about the importance of drawing from lessons learned in the Fleet and feeding them back through the chain-of-command (the feedback loop).
Since that update, there have been several substantial changes. Most notable is that I have consolidated the DRRS-N responsibilities under a single, accountable person who now reports directly to me as a Special Assistant. This move was necessary to give DRRS-N the direct attention it needs from me and my Special Assistant.
Now, it was apparent to me from the time we transitioned to DRRS-N that it was a work in progress, but the more I learned over the past four months, the more I became convinced it was not meeting the needs of our Type Commanders. Our ability to provide forces ready for tasking depends on Type Commanders having a clear and accurate view of their units’ operational readiness. When Commanders view a unit in DRRS-N, there should be no doubt in their minds that what they are seeing is based on fact and in no way differs from what they see when they visit the unit in person. Yet I’ve found that there is too often a significant discrepancy between what DRRS-N tells me and the reality I find…and that is simply unacceptable.
To that end, I directed my DRRS-N Special Assistant, CAPT Skip Shaw, to develop a comprehensive plan to fix the operational issues in DRRS-N and get the program on the right path for long-term sustainment. Specifically, CAPT Shaw will be working with the TYCOMs to reduce complexity, institutionalize standards for the Commander’s Assessment, train the Fleet (a very important one!), and eventually transition the program management functions to an appropriate SYSCOM. (See attached slides for details)
I have also directed CAPT Shaw to work directly with my Type Commanders to ensure DRRS-N is accurately reporting their units’ readiness based on their standards.
Although we have much work to do on DRRS-N, I believe we have the right people in place with the right plan to get it done; we’re already seeing some improvements. SURFLANT has made great progress reducing complexity for the end user by cutting the number of tasks they report against by up to half for some ship classes (e.g., DDGs). SUBLANT established an in-house DRRS-N Tiger Team that has already started putting together a “DRRS-N for Dummies”-type guide that will help everyone, especially me! As you can see, there is no shortage of advantages from working together on this…we all benefit from collaboration, coordination and communication.
All the best, JCHjr
15 March 2011
5 comments:
Admiral:
GAO reports on military readiness since the mid-1990s have pushed DOD to develop a more comprehensive readiness reporting scheme. Beyond SORTS, the new attributes needed are a system that is "close" to real time, helps establish trends, allows analysis for predictability, and takes advantage of advances in information systems and connectivity.
Readiness must be thought of not only as resources and training, but as what we really produce when those come together: mission performance. And it must be able to be calculated above the unit level.
Mission-Essential Tasks (METs) give us those concepts expressed as tasks, conditions, and standards, and they can be mapped against any mission, mission area, or capability. Moreover, they show how the tasks link to other organizations in subordinate, supporting, and supported relationships.
To meet the standard (under the given conditions) requires the right resources and training. SORTS covers those.
A missing link between DRRS-N (ESORTS) and DRRS-S is performance. That’s where METLs come in. We can report against command-level performance standards, not just TTP execution.
Moreover, by fully constructing METLs IAW Joint and OPNAV guidance, we also create a way to visualize the mission (and the organization/chain of command and lines for coordination and control), value the contributions from all participants, verify progress (measuring against the standards and keeping track), and validate Courses of Action (COAs).
One of the other missing links in SORTS/DRRS-N is establishing trends for force-wide tracking.
Just like SORTS, an update in DRRS-N wipes out previous data. Our training systems, though, do retain training data we can begin to trend.
Fleet Cyber Forces has just led the initiative to bring “EW” into DRRS-N readiness reporting, but it must be followed up by an aligned mission analysis that discovers the right way to articulate EW-focused tasks, conditions, and standards across the force.
METL tools: the UJTL and UNTL are flexible enough to accommodate necessary changes. Your Commanders in charge of NMETLs (and the WCOEs, etc.) can collaborate to more clearly state mission performance expectations (e.g., NMET “Standards” across the force) and use them to align training, readiness, and future capability requirements.
Very respectfully,
DKBrown37
DKBrown37,
I appreciate your clear enthusiasm on this topic. Indeed, we need our Commanders fully engaged on how they articulate their standards, but I am concerned that we have focused too much on tasks and filling out required reports, and as a result lost sight of the bigger picture of operational readiness.
I understand DoD requires that we use METs, but I encourage my Commanders to think about them (vice simply accepting them as a requirement), and ask the tough questions, such as:
• What is essential? Is it even possible to break down something as complex as naval warfare into tasks and still have it make sense?
• Do we understand the biases that arise when measuring these tasks? What does it mean when the same exercise could produce two different assessment scores?
• What does it mean when we average all the assessment scores? Are we forcing ourselves to always come up T2 (or “Q”)?
• Should the Warfare Commander be graded on a deficiency of unit-level training? How does that help the unit or the Warfare Commander improve? How does this information feed back into the training system?
• How do we measure intangible qualities such as leadership, judgment, and resourcefulness – and many other qualities that I have emphasized over and over again? How does a ship in fair condition with strong leadership compare to a ship in great condition with weak leadership? Will NMETs give us the answer?
Most importantly, do these assessments accurately reflect reality? Do they have any meaning from which I can make decisions? All too often, we focus on tinkering with these measurements, but we forget to check whether they actually provide an accurate picture of reality. I have frequently seen this disconnect myself, which is why I encourage my Commanders to look at their units in person.
I have been told more than once that METs simply need more work and tweaking to get them right. Well, we’ve been tweaking them for nearly a decade now, and we still don’t seem to have a clearer picture…and that’s why I will always rely, first and foremost, on the professional judgment of my Commanders. Thanks for taking the time to comment.
All the best, JCHjr
Admiral,
Having seen a number of the latest briefs from Navy leadership in DC and reading this article, I wonder if our leadership is spending far too much time and money on the overlapping assessment programs (5 Rocks, SURFMEPP, ABS, SORTS, METs, DRRS) and building up assessment teams that add more paper to the pile without turning a single wrench. Our old Engineering MTTs spent 1-2 days inspecting and writing down laundry lists, then the next 3 days assisting the crew in repairs or finding which parts needed replacement. They came back in a few weeks and followed up with hands-on help. Our training teams today focus on making ships capable of putting together "realistic scenarios" when some of them can't field a trained fire team. Watching RMCs continually hire more civilian managers, in anticipation that 3-4 years downstream we'll have sailors back at the I-level, is comical...GS-13s hired to supervise a single GS-12...and lots of them. I question the application of assets to getting hands-on action in our ships. Phil could file the same report today that he gave you two years ago, with some addendums.
V/R, Retired O-6
Admiral,
I like the initiative you are taking. One thing to keep in mind is to assess readiness against something, such as OPLANs. For a Unit Commander to say they can fly, track a submarine, or shoot a missile successfully does not show readiness to fight an opposing force per the current OPLANs.
Good Evening Admiral,
While attending a DRRS-S operational employment course, the opportunity arose to present the DRRS-N linkage to DRRS-S as part of developing the DRRS Center of Excellence (DCE) for Europe.
Through the use of the Capability Trees “tool,” we have directly linked UICs with missions assigned to the Navy commander in DRRS-S.
This operational linkage has generated a level of excitement in the COCOM. If we can write the plans/annexes with the proper amount of detail (detailed planning), we will close the supporting/supported loop for readiness reporting against aligned PRIMARS and the PESTOF.
Operational DRRS-N is the next great step. Moving the tool into the Numbered Fleet Commander realm will finish the growth cycle of the program from a TYCOM tool to an operational tool and, subsequently, one used across the entire Navy.
Fidelity in reporting is a must; assessment ratings have to go beyond the PowerPoint depth of button management and the copying and pasting of CASREPs.
V/R
Mase