
Dave Bakken’s Consulting Home Page

I consult on applied distributed computing and next-generation power grid communications and coordination, especially when they need to adapt to resource change, failures, or attacks.

Caveat emptor: this page is not pretty (yet), but <ahem> maybe you are actually looking for deep technical expertise, not website design ... Imagine that!

New item: my one-day training class, “Distributed Computing for Decentralization of Grids and for IoT”. 90% of it applies to non-power settings: it targets application engineers who have not been exposed to distributed computing and middleware but need a basic understanding and some key insights, for the IoT or just in general.

Contents

Overview

Distributed Computing in a Nutshell

Consulting History

Summary of Positions Held

Training Class

Letters of Recommendation

So what do I REALLY do?

A Few Pictures of Me

For More Information

My Business Card (Front and Back)

Overview

Here is a condensed overview of my qualifications.

The Big Picture

I am a senior technical consultant on distributed computing, and one of the leading experts (arguably the leading one) on next-generation power grid communications and coordination for wide-area networks. I have worked on the power grid for 25 years and was a DARPA PI (with Cornell and the University of Illinois as subcontractors) developing a large middleware framework for the wide area, Quality Objects (QuO). QuO has flown in a Boeing experimental aircraft; was evaluated for use in the Navy’s DD-21 program (which got sidetracked); was used to integrate seven entities’ QoS-related mechanisms in a prototype demonstration for the US Navy (SPAWAR); and was the technical centerpiece of the DARPA Quorum program.

While at WSU I worked on both novel communications systems and coordination. On the communications side, my team and I researched and built a mature data delivery system for the power grid, GridStat, which is the only communication technology fast and reliable enough to justifiably be used by Remedial Action Schemes (RAS), last-resort tripwires for the grid. It has been deployed in a trial between the US energy labs PNNL and INL; carried live data from our regional utility, Avista, for 15+ years; been deployed in DETERLab[1],[2], a leading cyber-physical hardware-in-the-loop testbed (with real networking and power hardware augmented with simulation); and had roughly $3M total invested in it by DOE, NSF, DHS, NIST, and others over the last 25 years. GridStat is a bit of an odd duck (platypus?): a real-time, rate-based publish-subscribe overlay network utilizing multiple parallel paths and per-subscriber, per-update QoS. It is also Software-Defined Networking (SDN) technology, albeit at the middleware layer, not the network layer: it does its (static) routing based on a data variable, not an IP address.
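GridStat's per-subscriber, rate-based delivery can be illustrated with a toy sketch (hypothetical names and API, invented for illustration; not GridStat's actual interfaces): each subscriber to a published variable asks for its own rate, and the forwarding engine simply drops the updates that subscriber does not need.

```python
# Toy sketch of rate-based pub-sub forwarding with per-subscriber QoS.
# Names and API are hypothetical, not GridStat's actual interfaces.

class ForwardingEngine:
    def __init__(self, publish_rate_hz):
        self.publish_rate_hz = publish_rate_hz
        self.subscribers = []   # each: callback, decimation factor, counter

    def subscribe(self, callback, rate_hz):
        # Forward every Nth update so the subscriber sees ~rate_hz.
        decimation = max(1, self.publish_rate_hz // rate_hz)
        self.subscribers.append({"cb": callback, "n": decimation, "i": 0})

    def publish(self, variable, value):
        # Routing is keyed on the data variable's name, not an IP address.
        for sub in self.subscribers:
            if sub["i"] % sub["n"] == 0:
                sub["cb"](variable, value)
            sub["i"] += 1

engine = ForwardingEngine(publish_rate_hz=60)
fast, slow = [], []
engine.subscribe(lambda var, v: fast.append(v), rate_hz=30)  # every 2nd update
engine.subscribe(lambda var, v: slow.append(v), rate_hz=10)  # every 6th update

for i in range(60):   # one second of 60 Hz updates from one sensor variable
    engine.publish("substation7.bus2.voltage", i)

# fast sees 30 updates and slow sees 10 -- same stream, per-subscriber QoS.
```

The real system applies this per hop across multiple parallel paths, so each forwarding engine carries only the traffic its downstream subscribers actually need.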

I have also developed DCBlocks, a coordination framework for RAS, prosumers, and many other uses, including IoT in other industries.

Finally, in my early years at WSU, on the non-power middleware front, I developed MicroQoSCORBA.

Strengths

·         Ability to dig deep to tease out underlying assumptions, misunderstandings, and hidden or missing requirements, and, ultimately, how they apply in the next greater context, which is where projects often fail.

·         Deep, intuitive, and experiential knowledge of applied distributed computing

·         Offering insightful questions and observations outside of my core areas, and even often outside of computer science and electric power

·         Experience with and knowledge of the power grid’s ICT realities: I have dug in deeply and visited utilities, funding agencies, university research programs, etc., far more than all but a handful of people with this specific applied knowledge. (Most ICT professors who claim power expertise juggle it with 4–5 other application areas that they use as examples for their research. Every year I am assigned a few papers to review from such people who have made really screwy assumptions about the realities of power grid ICT.)

·         Secured funding from a wide range of sources: DARPA, ARPA-E, NSF, DOE, DHS, the US Air Force (AFRL, Rome Lab), Cisco, HP Labs, RTE France, the Norwegian Research Council, and a number of others.

·         A talented facilitator, who quickly understands the big picture and sees connections (especially interdisciplinary) that other people miss. I can bring diverse teams together and then help keep them focused on the overall goals and mission.

·         Collaborated with many Fellows of the National Academy of Engineering, IEEE, and ACM (highlighted in my CV linked below).

·         Understand the business world much better than your garden-variety academic-only STEM professor: I’ve worked full-time jobs in industry both pre- and post-PhD, have read The Economist weekly on and off (mostly on) since my Boeing days (40 years), and have been on the Board of Directors of a cyber-security startup.

·         Writing successful, deeply technical proposals and technical analyses.

·         Critiquing technical standards related to middleware, and power grid communications.

·         Working knowledge of the power grid that is almost unheard of for an applied (or even theoretical) computer scientist, at least one who is an expert in distributed computing.

·         The ability to explain complex distributed computing concepts to people without a computer science degree yet without being superficial: they gain some valuable intuition.

·         I quite openly admit my weaknesses….

Weaknesses

·         I don’t know every 4th-level detail of every distributed algorithm, which is about all that many theory-only professors do know. But I know what those details are, or at least the level just above them, where they fit in a coherent distributed architecture, and how and where to dig in to learn them when I need to. And I have experimental and deployment experience.

·         I can be informal at times, when it doesn’t seem (too) inappropriate to me, but never to a level that distracts others from the goals of any meetings. I am known for my tongue-in-cheek deadpan humor.

·         I don’t follow all the latest nth level details of cloud and middleware implementations, but I know that overall space and dig in when necessary, then offer an evaluation from my decades of experience knowing and advancing the state of the art.

·         I don’t suffer fools gladly but have the savoir faire to bite my tongue when necessary (my health care plan fortunately covers tongue re-attachment).

·         No formal software engineering training, except for a voluntary evening class at Boeing in 1985. But the higher-order bit retained from my time at Boeing was the desperate need for software engineering.

·         No current software development training or experience. I used to be very good at systems programming (concurrent programming, distributed programming, operating systems, language runtime systems, etc.) but that skillset has long withered, and I used gnu emacs, makefiles, etc. on Unix. I remember the concepts and lessons learned very well, I just can’t sling code efficiently anymore (and would not take a job that relegated me to that, anyway: they would not be using most of my talents that can benefit others much more).

For more information about me—if you’re not bored yet—see the longer bio blurb in my training class flyer linked below.

Distributed Computing in a Nutshell

Computer networks get data from Point A to (multi-)Point B with some statistical properties: delay, drop rate, error rate, etc. Distributed computing, then, is a layer above the network that answers the question:

Now that we have this (inter)network, how do we best use it?

In part this means: how do we make programming it a lot easier (really, a lot less difficult: it’s still hard)?

It involves cooperating processes running on a network and communicating only by messages (e.g., no shared memory), in the face of different kinds of failures.

Part of this includes application-level help to coordinate, synchronize, replicate, and reach consensus (agreement) on a value or decision. It also includes middleware: a layer of software that makes programming across a network much less difficult (it’s still not easy). Middleware handles heterogeneity (diversity) across different kinds of network technology, CPU architecture, operating system, programming language, and even different vendors’ implementations of the same standard. It also provides a much higher level of programming (e.g., distributed objects or distributed events via a human-readable name) compared to network programming (a buffer of bytes via an IP address that is just a number with no inherent meaning).

For more information, see an encyclopedia article I wrote on middleware a quarter of a century ago.  Its example technologies are a bit dated, but the first page absolutely nails what middleware is, in a way that people without a computer science degree, or even programming experience, can understand.

Virtually every other industry uses middleware unless there is a good reason to use socket-level network programming (e.g., ultra-high performance).
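The socket-vs-middleware contrast can be made concrete with a minimal sketch (purely illustrative: the names and the dict-based stand-in for a naming service are inventions, not any particular middleware's API). At the socket level you ship a buffer of bytes to a numeric address; with middleware you invoke by human-readable name and the layer below handles lookup and marshalling.

```python
import json
import struct

# Socket level: a buffer of bytes to a numeric address; meaning is implicit.
addr = ("192.0.2.17", 5000)               # just numbers, no inherent meaning
raw = struct.pack("!id", 42, 118.2)       # caller and callee must agree on byte layout

# Middleware level: a human-readable name plus structured data.
registry = {}                             # name -> handler (stands in for a naming service)

def register(name, handler):
    registry[name] = handler

def invoke(name, **kwargs):
    # JSON round-trip stands in for marshalling/unmarshalling across
    # heterogeneous machines, languages, and vendors.
    payload = json.loads(json.dumps(kwargs))
    return registry[name](**payload)

register("GridMonitor.report_voltage", lambda bus, kv: f"bus {bus}: {kv} kV")
print(invoke("GridMonitor.report_voltage", bus=42, kv=118.2))
# prints: bus 42: 118.2 kV
```

The point is the level of abstraction: the caller names a service and passes structured values, instead of agreeing out-of-band on an address and a byte layout.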

Consulting History

I’ve consulted a bit here and there while a full-time professor at WSU, but until I retired my research projects kept me too busy to do much more. Now, after taking some time off, I have plenty of time and energy! I am looking forward to digging into, and contributing to, some new, interesting, and challenging projects while contributing to my employers’ success.

Siemens, Munich HQ (2018-2020)

·         Helped evaluate the effects of grid decentralization on the power grid. Led the drafting of a long paper, Grid Decentralization: Challenges, Solutions, and a Transatlantic Summit, involving Washington State University, the Technical University of Munich, Siemens, and others. (Sadly, that whole initiative was shelved due to the COVID-19 pandemic.)

·         Liaison to (and presented seminars at) KU Leuven, the Technical University of Munich, RWTH Aachen, and the European Commission.

Intel (2015)

·         Used my vast contacts in the power sector to help start their (now widely successful) Energy Central, by lining up the very early set of content providers.

Harris Corporation, Florida (2010)

·         Consulted on power grid data delivery issues.

·         Led, and wrote most of, a major proposal to the US Department of Energy.

Real-Time Innovations (2010)

·         Consulted on power grid realities.

·         Taught a half-day course on IT realities in the power grid.

Note: RTI is the market leader in the OMG’s publish-subscribe Data Distribution Service (DDS), which I also used in my programming assignments in my distributed computing classes in multiple years.

Pacific Northwest National Laboratory (2015)

Supported the GridOPTICS Software System (GOSS) Architecture for the Power Grid project

·         Studied its APIs, test suite, and example programs

·         Interviewed PGT designers, especially to identify and analyze design decisions (both explicit and implicit) and the tradeoffs hidden behind the APIs.

Amazon (2003)

·         Consulted on fault-tolerant multicast algorithms and systems.

·         Fun fact: I linked them up with Ken Birman’s group at Cornell and told them they should work with those guys, too. Then, less than a year later, Ken’s senior research scientist, Werner Vogels, was named CTO of Amazon.

Summary of Positions Held

·         Assistant Professor & Associate Professor & Professor (now Emeritus), School of EECS, WSU, June 1999 to present.

·         Visiting Professor, University of Oslo & Simula Research Lab, Norway, AY2004-2005

·         Scientist, Distributed Systems Department, BBN Technologies, July 1994 to June 1999.

·         CTO, NASPInet Consulting Services, 2010-present

·         Graduate Research Assistant, Computer Science, University of Arizona. Besides my doctoral dissertation work on FT-Linda, I worked on the distributed language Synchronizing Resources for a few years. I parallelized its runtime system to exploit multiple processors, and did some low-level work such as porting its co-routines to new architectures (involving assembler routines that enter as one task and exit as another).

·         Software Engineer, Boeing, Seattle, WA, June 1985 to July 1988.

o   My Data_Flow system is still in use as of 2020. It was not asked for: I saw a desperate need, designed it, and coded it on the side, to help teach myself Unix and C. It identifies producer-consumer relationships in legacy FORTRAN code, which had a huge number of variables in COMMON (shared memory) due to the limitations of minicomputer debuggers. It then generates new common blocks based on which module (autopilot, autothrottle, etc.) produces it. This was a prerequisite for parallelizing the simulations.
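The core of the Data_Flow idea can be sketched in a few lines (toy module and variable names, invented for illustration): record which module writes each shared COMMON variable, then emit one new common block per producing module, and recover the producer-consumer relationships from the read sets.

```python
# Toy sketch of the Data_Flow idea (hypothetical modules and variables):
# group shared (COMMON) variables into new blocks by producing module.

writes = {                       # module -> variables it assigns (produces)
    "autopilot":    ["cmd_pitch", "cmd_roll"],
    "autothrottle": ["cmd_thrust"],
}
reads = {                        # module -> variables it consumes
    "display": ["cmd_pitch", "cmd_thrust"],
}

def new_common_blocks(writes):
    """One new COMMON block per producing module."""
    return {f"common_{mod}": sorted(vs) for mod, vs in writes.items()}

def consumers_of(var):
    """Producer-consumer relationships recovered from the read sets."""
    return sorted(m for m, vs in reads.items() if var in vs)

blocks = new_common_blocks(writes)
# e.g. blocks["common_autothrottle"] == ["cmd_thrust"]
#      consumers_of("cmd_thrust")    == ["display"]
```

Once every shared variable lives in a block owned by exactly one producer, the modules can be scheduled on separate processors without hidden write conflicts, which is why this analysis was a prerequisite for parallelizing the simulations.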

Training Class

I offer a day-long training class, Distributed Computing for Decentralization of Grids and for IoT (US Letter, A4). It is intended to give insights into distributed computing for power engineers, non-IT application engineers, regulatory folks, and IT people who either never had a class in distributed computing (probably 80–90% of them) or had one that was theory-oriented. (The latter often can’t see the architectural forest for the trees of algorithms that comprise it, because their professor could not either, focusing on the minimal publishable delta of a theory-only algorithm that is never implemented, or at best implemented in throwaway code nobody else could possibly use.) For information on taking a class, email training@naspinet.com.

Letters of Recommendation

Current

All of these letter writers have collaborated with me and known of my work and background for 25+ years. (You can download all references at once here.)

1.      Dr. Douglas Schmidt. This is by far my deepest reference, especially on the BBN QuO work. He is the Director of Operational Test and Evaluation for the US DoD and has long been academia’s leading expert on middleware and its implementation. He evaluates my overall distributed computing expertise and also my impact on multiple DARPA programs while developing QuO at BBN.

2.      Prof. Mustaque Ahamad, Regents Professor at Georgia Tech. He is very distinguished in distributed computing and cybersecurity. He is a co-founder and chief scientist for multiple startups.

3.      Prof. Ken Birman, Professor at Cornell University. He has led software that has been deployed in the New York Stock Exchange, the French Air Traffic Control System, the AEGIS warship, and other mission-critical settings, and is co-founder for multiple startups.

4.      Mr. Jeff Dagle, Chief Electrical Engineer at the US Dept. of Energy’s Pacific Northwest National Lab. He has also contributed to a National Academies report on grid resiliency.

Right After BBN (2001–2005: no power references yet)

These references have the writers’ affiliations as of 2001–2005. Note: most are converted into pdf from hi-res scans from 2001, so not quite as clean and sharp as today’s spiffy print-to-pdf ones above.

  1. Dr. Aad van Moorsel, HP Labs (which back then was a huge research center; now it seems mainly to do advanced development, or at least immediately-applicable research). Dr. van Moorsel is a recognized expert in fault-tolerant computing and serves on several prestigious conference program committees. He has been very aware of me and my work since circa 1997 at BBN and is aware of my work at WSU. I have visited HP Labs and given technical presentations on three separate occasions, and I have had numerous technical discussions with him at conferences. HP Labs was also an early funder of my WSU research.
  2. Dr. Richard Schantz, BBN Technologies. Dr. Schantz is considered one of the pioneers of distributed computing, having done one of the first PhD dissertations on distributed operating systems in 1974 and having conducted research on middleware for wide-area networks since 1979; he is widely considered the Father of Middleware for his Cronus distributed object system. Cronus, a middleware framework that was a predecessor to CORBA, was deployed widely by the military and was recognized by the Smithsonian in the late 1990s. Indeed, when I was there, circa 1998, the Air Force was begging BBN to take its money and keep maintaining Cronus for a few more years, because it felt that CORBA, which had been out since 1989 and was also based on actual implementations, was not yet mature enough. BBN obliged, despite having higher-priority uses for its personnel.
  3. Mr. Gregg Tally and Ms. Terry Benzel, Network Associates Inc. Labs. Mr. Tally is a DARPA PI with 18 years of experience at NAI Labs and its predecessor, Trusted Information Systems. Ms. Benzel, when I consulted, was VP of Advanced Security Research for Network Associates. I knew Terry well from PI meetings for the Air Force and DARPA, and when she learned I had left BBN and was available for consulting she immediately pursued this. I consulted for her distributed security group, led by Mr. Tally, on research in the DARPA OASIS project. Our research provided tolerance of Byzantine failures (including malicious takeovers by hackers) to CORBA and involved an advanced prototype.
    1. Update (2025): Terry Benzel has now long been at USC’s Information Sciences Institute, and has testified before Congress. I have visited them and investigated possible collaborations in recent years.
  4. Mr. Dave Lounsbury, VP of Advanced Research & Innovation, and Mr. Doug Wells, Research Director; both of the Open Group. The Open Group, in Cambridge, Mass., is a deeply technical organization that used to be called the Open Software Foundation. In the 1990s it developed a vendor-neutral version of Unix with support from HP, IBM, Sun, and others. It also does a lot of applied DARPA research and integration of others’ DARPA research, and in this context I interacted quite a bit with both signatories. They are very familiar with my BBN work on QuO, and we teamed up with 3–4 other organizations on a proposal to DARPA circa 2002.
  5. Mr. Mark Riggins, Amazon. He noted that they found me via the premiere distributed computing conference, ICDCS; the paper (with Georgia Tech and IRISA in France; I was a voting member of the first author’s doctoral committee):

Krishnaswamy, V., Ahamad, M., Raynal, M., and Bakken, D. “Shared State Consistency for Time-Sensitive Distributed Applications”, in Proceedings of the Twenty-First International Conference on Distributed Computing Systems (ICDCS-21), IEEE, Tempe, Arizona, April 2001. Also reprinted as the sole article in the Newsletter of the Technical Committee on Distributed Processing, IEEE Computer Society, Fall 2001. Acceptance rate: 32% of 217 submissions (from 19 countries). HONOR: Best Paper Award. Note: this is the most prestigious general conference in distributed computing.

6. Mr. Ronald Riter, who was my mentor at Boeing (1985–1988) and with whom I have kept in touch and interacted many times since.

West Point (1981)

7. BGEN Joseph P. Franklin (West Point Commandant of Cadets, 1981). This speaks to integrity, etc. And it seemingly got my security clearance processed in record time at BBN!

So what do I REALLY do?

Here is what I have in my smart-aleck email .sig:

GridStat Project: Since 1999, cheerfully dragging the wide-area data delivery services of the electric power grid -- kicking and screaming -- into the mid-1990s.  ETA: 2030 (for 10% penetration of mid-1990s middleware technology).

Circa 2016 I saw a demo by a mid-sized power grid vendor. It was really cool: they had a bunch of generators and, for the first time, used simple publish-subscribe middleware to let new generators announce their availability to the management system and then be integrated to support the mission of the confederation of generators, which is essentially an electricity delivery service.

This was a huge advance in many ways, and they were justifiably proud of it. Indeed, when I started visiting utilities and vendors in 1999, I would often shake my head in disbelief to see that they were using static configuration files. Sure, those are one small step better than having the configuration information hardcoded into (i.e., compiled into) the program. Ergo, this simple use of pub-sub was a big step.

But here’s the thing: pub-sub by then had been best practice in every other industry for 20 years, and 25 years for the US military (at least the US Air Force). Not just state of the art (theory or new barely-used implementations), but actual state of the practice, used for some time. And they seemed to be utterly unaware of generalized naming in distributed computing systems, which had also been widely used for over 30 years (starting with X.500, which BBN implemented for many years).

I have a lot more work to do to get past 10% penetration! C’est la vie! And do check out my training class.

A Few Pictures of Me

·         Plenary presentation at Smart Grid World Forum, Beijing, China, 2011

o   At the Podium

o   VIP badge

·         Seminar announcement at BBN in 2019 for my seminar, Distributed Coordination (IF Secure and Smart) Enables the Internet of Things. The guy on the left is Ray Tomlinson, who wrote the first inter-computer email program, decided on the ‘@’ convention for email addresses, and sent the first email message in 1971–1972; his office was just down the hall from mine. Next is Ed Campbell, who was my department manager at BBN and by the time of this picture was CEO of BBN. Finally, there is some unknown skinny guy from South Chicago who reportedly enjoys pickup hoops games. Below them is some ugly guy who probably broke the camera and caused people to lose their lunch when they saw this announcement monitor in the cafeteria; as of early 2025, this rogue researcher has somehow avoided tort claims for this vomitus.

For More Information

Collaborations with Highly Distinguished Researchers

I’ve collaborated many times with Fellows of the US National Academy of Engineering and IEEE or ACM.  A tabulation (see my CV, linked below, for the color-coded details):

Fellow Category / Kind of Collaboration | US National Academy | IEEE or ACM
--------------------------------------- | ------------------- | -----------
Grant and Donation Proposals            | 10                  | 5
Refereed Journal Papers                 | 6                   | 9
Magazine Articles                       | 2                   | 2
Invited Conference Papers               | 0                   | 2
Refereed Conf/WS Papers                 | 7                   | 17
Other Papers                            | 1                   | 3
Book Chapters                           | 3                   | 5
Other Publications                      | 9                   | 6

Annotated Select Publications

Please email me if you don’t have access to these (e.g., no IEEE publications access).

  1. Towards Enhanced Power Grid Management via More Dynamic and Flexible Edge Communications, invited paper for the (First) Fog World Congress, IEEE and OpenFog, Santa Clara, CA, Oct 30–Nov 1, 2017.

·         This is, by request, an expanded version of the paper with URLs, which the IEEE does not allow in its publications. I did this as a service to the Fog and Edge communities.

2.      D. Bakken (ed). Smart Grids: Clouds, Communications, Open Source, and Automation, CRC Press, 2014, ISBN 9781482206111.

  3. D. Bakken, A. Bose, C. Hauser, D. Whitehead, and G. Zweigle. “Smart Generation and Transmission with Coherent, Real-Time Data”. Proceedings of the IEEE, 99(6), June 2011, 928–951.

·         Considered a seminal work on wide-area power grid communications requirements, design requirements, etc.

·         I was directly invited to write this paper in 2010, showing my long-held credibility in power grid communications.

·         Proceedings of the IEEE is not just for power, but for all of the IEEE’s 39 societies. It is considered the most prestigious journal in all of the IEEE.

·         Cited 247 times as of 2023 (per Google Scholar).

4.      H. L. P. Banerjee, S. Noddodi, A. Srivastava, D. Bakken, and P. Panciatici, “On the need for robust decentralized coordination to support emerging decentralized monitoring and control applications in electric power grid,” in Proceedings of the Fourth Grid of the Future Symposium, CIGRE, Chicago, Oct 2015.

  5. K. Tomsovic, D. Bakken, M. Venkatasubramanian, and A. Bose. “Designing the Next Generation of Real-Time Control, Communication and Computations for Large Power Systems”, Proceedings of the IEEE (Special Issue on Energy Infrastructure Systems), 93(5), May 2005.
  6. Zinky, John A., Bakken, David E., and Schantz, Richard E., “Architectural Support for Quality of Service for CORBA Objects”, Theory and Practice of Object Systems (Special Issue on CORBA and the OMG), 3:1, April 1997, 55–73.

·         I wrote 90% or more of this paper. (Zinky contributed a bit more to QuO itself, but has dyslexia so does not write a huge amount; Schantz was involved less with the details and more with the architecture, goals, constraints, reality checks, etc.)

·         Cited 703 times as of 2023 (per Google Scholar), even though it is not published in one of the well-read (IEEE or ACM) journals.

·         This is considered the seminal work on wide-area network (WAN) Quality of Service (QoS). It describes what QoS over WANs can, and should, be.

7.      David E. Bakken, Richard E. Schantz, and Richard D. Tucker.  “Smart Grid Communications: QoS Stovepipes or QoS Interoperability”, in Proceedings of Grid-Interop 2009, GridWise Architecture Council, Denver, Colorado, November 17-19, 2009.  

·         Best Paper Award for “Connectivity” track

·         This is (as of 2009 at least) the official communications/interoperability meeting for the pseudo-official “smart grid” community in the USA, namely DoE/GridWise and NIST/SmartGrid.

MicroQoSCORBA

Rather than just stripping a CORBA (distributed object) ORB down to the bare minimum for small embedded devices, enabling them to be IoT, as others had (getting the binary size down to 5K RAM), we made it highly configurable (think “macro hell” for our implementers). If you only sent but did not receive messages, then the code to receive messages was not compiled in; same for different QoS properties. Lockheed Martin’s Advanced Technology Laboratories, which looks for and evaluates technology, installed it and compared it to other “small CORBA” implementations (this effort was led by Gautam Thaker). Ours was the fastest they tested, and it actually had less jitter than some (non-small) real-time ORBs. (In retrospect, that makes sense: if you strip a lot of code out of the execution paths in the binary, you get more CPU cache hits.) Gautam configured it to use shared memory to communicate between client and server, which removed network variability. He found that a CORBA call up and down the stack added 50 microseconds over a direct local call. And this was circa 2002: it would be single-digit microseconds now, thanks to Moore’s Law.
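The configurability idea can be sketched in a few lines (a toy analogue in Python with invented feature names; MicroQoSCORBA itself did this at C++ compile time via macros): code for a feature the application does not request simply never makes it into the build, rather than being included and disabled at runtime.

```python
# Toy analogue of MicroQoSCORBA-style configurability: only requested
# features are emitted into the generated stub at all -- the analogue
# of #ifdef'd code never being compiled in.

FEATURES = {                      # hypothetical feature -> its source code
    "send":    "def send(msg): ...",
    "receive": "def receive(): ...",
    "encrypt": "def encrypt(msg): ...",
}

def generate_stub(requested):
    # Unrequested features contribute zero bytes to the binary.
    return "\n".join(src for name, src in FEATURES.items() if name in requested)

stub = generate_stub({"send"})    # a send-only device: no receive path exists
# "def receive" does not appear anywhere in stub
```

Shorter execution paths are also what made the jitter result unsurprising: less code in the hot path means more of it stays in the CPU cache.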

Anyone can afford a few more microseconds to call remotely, even more so given all the benefits that middleware provides (see a summary of them in the encyclopedia article on middleware that I wrote, overviewed above).

Other Background Info

My CV.  Pay particular attention as to how many invited presentations I’ve given to either power audiences or mixed power-IT audiences; these are listed in the last few pages.

I have been on the Board of Directors (and Chief Distributed Systems Architect) for a cyber-security startup, TriGeo Network Security, which was eventually bought by SolarWinds. I learned a lot about financing and management of startups, but even more about human nature (I got (legally) screwed out of a lot of stock options). C’est la vie & live and learn.

My Business Card (Front and Back)


Yes, I actually hand out the consulting side with Dogbert on it. This is a useful filter: if someone who reads it does not want to explore hiring me, then I don’t want to work for them. This saves us both time. I had the research director of a large (American) utility tell me that this was the best business card he’d ever seen!

It is, of course, pointing out that the leader of any one-person consultancy has multiple roles.  And, also, utilizing my dictum: “Life is too short to not make fun of it.” And also in line with the dictum of that great, spinach-loving naval philosopher, Popeye: I yam who I yam.

I am cross-culturally sensitive, however. In Germany, the Prussian ethos—discipline and seriousness—dictates that I warn them (with a straight face but, of course, “tongue in cheek”) that, if this card is too serious and/or formal for Teutonic sensibilities, then I can make it more informal.  It takes them 5–10 seconds of being puzzled to realize that I am not serious here, but none have fired me, or even seem to have been offended, yet. But, then, perhaps Prussian ethos also involves savoir vivre, or at least good acting!

Bonus: An Epic April Fool’s Joke

If you can spare 15 minutes to laugh your tail off, check out this April Fool’s joke I inflicted on my former advisor in 1992.  It became famous in the department and was still being talked about 30+ years later. I was asked to write this up for the professor’s 2010 retirement roast.

 



[1] Ryan Goodfellow, Robert Braden, Terry Benzel, and David Bakken. “First Steps Toward Scientific Cyber-Security Experimentation in Wide-Area Cyber-Physical Systems”, in Proceedings of the Eighth Annual Cyber Security and Information Intelligence Research Workshop, ACM, Oak Ridge, TN, January 2013. Email Bakken for a copy if you can’t find it online.

[2] The next-generation version of DETERLab is SPHERE, by the same organization (USC’s Information Sciences Institute, USC-ISI): https://sphere-project.net/