Well, it looks like The Quantum Observer has fully become a management consulting blog. Unfortunately, I am a creature of impulse and can only write about things that are top of mind for me. Since I am currently dealing with the consequences of my own failure to create robust systems for running my team, I figured I would try to help some of you avoid my grim fate.
In a previous post, we discussed Hiring, the first system your employees will encounter during their time in your organization. While important, it is not the last word on systems building, only the foundation on which you can begin constructing an effective team1. If you have recently left the familiar comfort of the IC2 world for the uncertain terrain of leadership, it’s worth considering what you’ve got and how (or whether!) you should make it better.
In your IC role, you’ve certainly encountered and worked within3 most of the systems I’m about to list. Sometimes you’d even try to improve them, but the overall health of these systems was not your primary concern. As a lead, you will come to rely on the integrity and quality of these business systems, and come to appreciate how much they contribute to the long-term health of not only your local team(s), but the company as a whole.
Hiring
Onboarding
Work Tracking
Feedback
Info Dissemination
I should say that I am not sure what the optimal way to set up such systems is. It’s unlikely that there’s one perfect way to do this that fits all use cases, so the best I can do is imagine what I want these things to do, then try to figure out how to make that happen4.
Onboarding
Simply put: how do you take Dr. Schmuckatelli from inexperienced newb status to productive team member in the shortest amount of time? One great way is to preferentially hire Dr. Schmuckatellis who already know what they’re doing and are experienced with software and hardware that you use5. As we saw in the Hiring post, this is not easy, so it’s likely that most of your new hires will not come preloaded with exactly the right knowledge and skills. As a result, you must create a system by which new hires can find all of the information and tutorials they need to learn what the heck is going on with a minimum of time wasted or trivial questions asked.
There’s also the administrative side of this system, which mostly encompasses credentials, benefits, pay, taxes, etc. In my mind, this is mostly the focus of the HR department, but if you’re, say, the CEO of a nascent QC startup, hiring and tasking the head of HR is up to you!
HR might be able to help on the technical side of onboarding by providing templates, best practices, etc., but they just won’t be able to synthesize the relevant technical documentation. Thus, it falls to the technical staff to do this, the most painful, but arguably most important, of all tasks.
The bane of all onboarding experiences is idle time. I’d say for the first 90 or so days, your new hires should never not know what to do. Note that simply assigning a bunch of technical reading in the form of journal articles, theses, internal reports, etc. is not optimal. Most people without practical experience in a particular field will find it very hard to extract durable knowledge from reading alone, even if they are technically competent enough to follow the derivations. There is simply too much implicit knowledge hiding behind the literature. That’s why I’m starting to experiment with keeping a backlog of tutorials/problems that range in difficulty from Toy Problem → Real Problem. These problems should complement and reinforce the basic literature curriculum required to be a competent team member. Successfully solving them should require insights into the unstated or obliquely referenced assumptions that go into any technical task.
This approach is easiest to implement for digital tasks, where the hardware required is ubiquitous (personal computers) and the software can be trivially acquired. Of course, this same approach should work great for hands-on tasks, but obviously requires a much greater investment in time, space, and capital. In superconducting circuits land, the size, expense, and lead-time of dilution refrigerators make it difficult to justify having one6 fridge devoted to training/development7. This is kind of what graduate school is for, after all. However, if you’re hiring undergraduates to work on experiments, where would you prefer they learn? What would you prefer they break? A practice rig or the real thing?
Work Tracking
One embarrassing thing that happens to team leads all the time is having to answer “Uh, I’m not sure” when replying to inquiries about the status of a project you are ostensibly overseeing. The better answer is “As of our last check-in the status was X, but we can look at the GitLab ticket to see if there have been updates since then. If not, I can get back to you on date Y after our 1:1/team-meeting/weekly update/whatever”. Some immaculately organized people, with excellent recall and perfect time management, might be able to keep the current status of every project their team is tackling in their heads, as well as any blockers or decision points that might be coming up. These mythical humans can probably run a team of… dozens? without resorting to digital aids. I am not such a person, so my personal cognitive load tops out at about 8, assuming each of those 8 is a highly motivated, intelligent, independent person who can make most decisions without my input.
The other side of work tracking is work planning. This ties into the Vision that you have for where the team is headed and what work is most important and impactful both now and in the foreseeable future. Tracking and planning are what give you the justification and imperative to ask for8 more resources from the Powers That Be.
Tracking and planning for tightly scoped, well-defined tasks is, if not easy, then at least conceptually straightforward. A multitude of what I guess I’d call business systems have sprung up to help manage these tasks. You’ve probably heard of them: Jira, Agile, GitHub, Kanban, Scrum, etc.
For example, building out lab space is a complex but well-defined task. It has an extraordinary quantity of subtasks and related activities, but you know what the end state looks like. If you want a working dilution refrigerator in a year9, you’ll order now. You also need wires, filtering, attenuation, and instrumentation, all appropriate to your intended use case.
What about planning for things like:
Achieve 99.99% 2Q gate fidelities
Achieve ms qubit lifetimes10
Achieve ms qubit coherence times
Demonstrate a logical qubit
In the case of the dil fridge, we know why we don’t have a dil fridge and how to ameliorate that. In the case of the more research oriented topics like the ones I listed above, the solutions are not as clear. They have interlocking, iterating modeling-design-fab-test workflows that themselves are in various states of well-definedness.
The work tracking for any of the tasks above will start with concrete steps: verify dil-fridge wiring, load experiment, cool down, verify I/O at base temperature, tune up devices. After the tune-up there is just “Achieve Milestone X”. How should that task be tracked? What is a reasonable amount of time to assign to it? Will other, higher priority parts need to be measured in Y weeks?
From my perspective, planning and tracking ambiguous, difficult tasks like this essentially boils down to having a series of GO/NOGO criteria. The idea is to understand, as soon as possible, whether the possible gain from the parts you’ve cooled will outweigh the time investment to see these gains. As soon as it is clear that the benefits don’t outweigh the costs, the parts should be warmed and replaced with a new chip.
For some tasks, this can take quite a while. This usually happens in complex experiments where many different devices need to be tuned and re-tuned. In the case of gate fidelity measurements, understanding and characterizing sources of error provides important information about how many 9s of fidelity are feasible.
So the question for the team lead is “what are the stopping criteria for this experiment?” As always, this is a judgement call. Your answer should depend on a deep understanding of previous work, the details of the fabrication process used to create the parts, and the desired results, to name a few.
Previous Work: Is this a totally novel device design with one or more untested elements? If yes, you should have reasonably lax stopping criteria if the data are looking funny. Ideally, you would also have an excellent grasp of the theory undergirding the expected device characteristics so you can decide whether the data are ‘good funny’ or ‘bad funny’.
Fabrication: Were these parts fabricated in a well-developed and well characterized process whose output has been consistent? If, say, these are standard transmons, but your foundry is trying out some new recipe for chocolate-glazed tantalum, then you are going to want to be lax with the stopping criteria again.
Results: What are you trying to do here? Figuring out whether you can possibly get ms of qubit lifetime is a lot more straightforward than tuning up multiple simultaneous 2 qubit gates to attempt some kind of error correction experiment. Heck, it’s much easier than tuning up one set of 2 qubit gates in order to get a record gate fidelity! Complex experiments with many moving parts require that you characterize each of those parts and figure out whether the sum can reasonably get you where you’re going.
So what happens if you’re measuring totally novel devices, in a new fabrication flow, trying to execute a complex, multi-part experiment? Nothing, because you have betrayed yourself. Don’t do that.
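To make the judgement call concrete, you could imagine folding the three factors above into a crude GO/NOGO rule of thumb. This is purely an illustrative sketch of my reasoning, not a real tool; the function name, inputs, and thresholds are all invented.

```python
# Hypothetical sketch: map the three risk factors discussed above
# (novel design, new fab process, complex experiment) to a stance
# on stopping criteria. All names and cutoffs are made up.

def stopping_criteria_laxness(novel_design: bool,
                              new_fab_process: bool,
                              complex_experiment: bool) -> str:
    """Return how lax the stopping criteria should be for a cooldown."""
    risk_factors = sum([novel_design, new_fab_process, complex_experiment])
    if risk_factors == 3:
        # Novel devices + new fab + complex experiment: you have betrayed yourself.
        return "abort: do not run this experiment"
    if risk_factors >= 1:
        # Novelty somewhere in the stack argues for patience with funny data.
        return "lax: tolerate funny data while you diagnose"
    # Known devices, known process, modest goals: fail fast.
    return "strict: warm up and swap chips at the first clear NOGO"
```

The point of writing it down, even this crudely, is that the team agrees in advance on what triggers a warm-up instead of relitigating it mid-cooldown.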
Another useful aspect of work tracking systems, even for R&D projects, is to concretely indicate when any single task has taken too much time. Without a formal system, it’s easy to lose track of how many days might have been squandered on steps that you would have expected to be simple (like preliminary calibration). Seeing that no progress has been made for days or weeks is a simple, straightforward signal that this task needs more attention. There may be some unforeseen problem that could be addressed by some intervention from a more senior team-member, or it could be an indication of a wider, deeper issue that might trigger an experiment abort.
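The “no progress for days or weeks” signal is easy to automate on top of whatever tracker you use. Here is a minimal sketch, assuming a toy representation of tasks as name-to-last-update dates and an invented five-day idle threshold; real trackers expose this through their own APIs.

```python
from datetime import date, timedelta

# Hypothetical sketch: flag tasks with no recorded progress for too long.
# The task structure and the 5-day threshold are invented for illustration.

def stalled_tasks(tasks: dict, today: date, max_idle_days: int = 5) -> list:
    """Return names of tasks whose last update is older than max_idle_days."""
    cutoff = today - timedelta(days=max_idle_days)
    return [name for name, last_update in tasks.items() if last_update < cutoff]

tasks = {
    "preliminary calibration": date(2024, 3, 1),
    "2Q gate tune-up": date(2024, 3, 11),
}
print(stalled_tasks(tasks, today=date(2024, 3, 12)))  # → ['preliminary calibration']
```

A weekly glance at a list like this is often enough to surface the task that needs a senior team member’s attention.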
The last thing I’ll say is that, whatever work tracking and planning system you use, you will probably want to put in estimates for how long subtasks are expected to take. Many people (yourself included) might be hugely uncomfortable trying to make these estimates, especially for things that have never been done before. This is normal, and you should emphasize that there will be no penalties for being wrong. Part of the output of a work-tracking system is a better estimate for how long new/unusual classes of activity take. In principle, a few cycles of this, maybe lasting a year, should leave you in a much better position to accurately gauge how much work you and your team can actually get done in a week/month/year.
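The estimate-refinement loop above can be made concrete with a small sketch: compare estimates to actuals per class of activity and derive a correction factor for future planning. The record format and the example numbers are my inventions, not data from any real team.

```python
from collections import defaultdict

# Hypothetical sketch: turn (activity class, estimated days, actual days)
# records from a work tracker into per-class correction factors.

def estimate_correction_factors(records: list) -> dict:
    """Return actual/estimated ratio per activity class."""
    totals = defaultdict(lambda: [0.0, 0.0])  # class -> [sum_est, sum_actual]
    for activity, estimated, actual in records:
        totals[activity][0] += estimated
        totals[activity][1] += actual
    return {a: round(s_act / s_est, 2) for a, (s_est, s_act) in totals.items()}

records = [
    ("cooldown + tune-up", 5, 9),
    ("cooldown + tune-up", 5, 7),
    ("data analysis", 2, 2),
]
print(estimate_correction_factors(records))
# → {'cooldown + tune-up': 1.6, 'data analysis': 1.0}
```

If tune-ups reliably run 1.6x over estimate, the fix isn’t to berate anyone; it’s to multiply future tune-up estimates by 1.6 until the underlying surprises are understood.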
Feedback
During and after the work you and your team are executing, you will have to give feedback. Feedback to your team about their performance, and feedback to related teams that have given you inputs for your work.
Performance Feedback
In a functional, high performing organization, essentially everyone craves actionable feedback. It’s nice to hear “you’re doing great”, but most of us are self-aware enough to have begun to form doubts regarding our personal infallibility. More practically, if you receive performance feedback that suggests you’re doing a great job, but don’t receive commensurate raises, bonuses, or promotions, the real feedback is that you’re doing OK. Then you’re faced with two questions: how do I do better? How do I get anyone to actually tell me what they want? I made an attempt to sketch an answer to these questions for early career BS holders in an earlier post. In general the same rules apply to more senior roles, but things get pretty interesting when you advance far enough to be responsible both for long-term goal setting and overseeing the execution. That’s a story for another post, though. Today I wanted to take a look at giving feedback and trying to make it easier on yourself.
A good feedback system is going to ingest the output of a good work tracking system. Because you’ve already scoped out the work that needs to be done, defined its requirements, and specified the deliverables, you have a ready-made rubric for evaluating performance! It’s not the whole story, but it’s a great foundation to build upon.
The details above will give you a good sense of who gets things done in a reasonable amount of time and to a reasonably good standard. You still might want to know who is demonstrating good judgement and doing so relatively independently. One useful tool is to hold regular ‘deep dives’ into ongoing work. These detailed presentations not only serve as a useful form of peer review11, but also give you a glimpse into how different members of your team approach their work, the intermediate quality of their work, and whether they actually know what the heck they’re doing without relying on more experienced members of the team. Deep dives also act as useful intermediate deadlines to maintain accountability and motivation. They’re also great for you, as the team lead, because these presentations form the foundation of eventual reports to stakeholders and other interested parties. I love ‘em, just don’t forget to take notes for later!
If you pay attention to your work tracking system, and you take good notes during your team deep-dives, you should be in a good place to give detailed, actionable feedback to everyone on your team. You should also be able to demonstrate detailed knowledge of their accomplishments over the last <N> months, which is the foundation of your credibility as a feedback giver.
Inter-team Feedback
The other kind of feedback you may need to give is to other teams that are important to your work. For example, if you are measuring some novel device and you need some theory support, but the theorists send you inscrutable LaTeX files with everything in ‘theorist units’, you might politely request something more friendly to a simple lab creature. Alternatively, you may have completed important measurements of the characteristics of your systems, things device designers and theorists might really want to know to improve their designs, or to bring their models closer to the reality of the lab. A failure to communicate this information weakens your whole organization, and might result in substantial wasted work while the information percolates through side channels.
You could imagine a semi-automated system that coordinates all of this by linking together tickets/nodes/whatever in disparate work tracking systems according to their relevance. Realistically, accomplishing something like that would require that every team lead have a fairly granular understanding not only of their own team’s workload, but also a similarly fine understanding of other teams’ workloads as well. It’s not impossible to do this and I’m sure some orgs12 are able to achieve this dream. The main way this is usually handled is by having lots of meetings, which sucks13. Most meetings don’t feel like work because they are filled with pointless bullshit around some minor nuggets of good information14. Keep in mind that each meeting is costing you money. A 10 person meeting with 6 minutes of bullshit just cost the company one person-hour of cash. A totally useless 10 person meeting that runs half an hour just burned 5 person-hours of cash! My advice to you, if you go the meetings route, is to have a detailed agenda, make sure speakers are prepared in advance, and ruthlessly shut down off-topic or redundant lines of conversation15.
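The person-hour arithmetic above is trivial, but writing it down as a one-liner makes it painfully easy to check before scheduling anything:

```python
# The meeting-cost arithmetic from the text: wasted person-hours
# scale linearly with headcount and wasted minutes.

def wasted_person_hours(attendees: int, wasted_minutes: float) -> float:
    return attendees * wasted_minutes / 60

print(wasted_person_hours(10, 6))   # → 1.0 person-hour
print(wasted_person_hours(10, 30))  # → 5.0 person-hours
```

Multiply by a loaded hourly rate if you want the number in actual currency; it only gets more depressing.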
Information Dissemination
Sometimes you may be called upon to present about the work of your team to some important external audience. Execs, visitors, classes of new hires, etc. If you have a good system for collecting important metrics from your work tracking system, this should be easy. The internal deep-dives your team regularly does should provide a foundation of good slides/figures about the details of the work you all do, while the metrics you track tell the larger story about the success of your team. Bam, done.
Unless, of course, you don’t track any metrics. Or you don’t know which ones are important, so you didn’t even record the relevant data. Then you’re fucked. So, I urge you to think hard about how success should be defined/measured for your team, just as you think hard about what defines success for each project your team undertakes. Once you have figured out what these metrics should be, start tracking them16.
I don’t have that much else to say here, except you should periodically check with your stakeholders/bosses/Powers That Be to make sure your concept of success is still aligned with their wants and needs.
Conclusion
My final piece of advice is that you should remember that the creation and maintenance of these systems is a constant, ongoing process. It’s easy to get complacent when it looks like things are going well, but that is when you need to be looking ahead to what the landscape will be like 3, 6, or 12 (or more!) months down the line. Onboarding materials will need to be updated, long-term goals re-assessed in light of new understanding, people might ‘forget’ to update documentation, priorities from Above might change, half your team might leave to start their own QC company, who knows! Complacency is death17.
Our line of work can be highly ambiguous, fluid, and difficult. Things you know for sure one day might be fully undermined the next after some new data are collected, or an important pre-print appears on the arXiv. Just doing the science, from theory to design to fab to experiment is extraordinarily challenging, requiring the labor of hundreds (thousands?) of physicists, engineers, technicians, and more. As you graduate from an IC role to a team lead role you will find even more tasks on your plate that are very much not the technical work for which you were likely promoted. You’ll be tracking multiple different projects, managing deadlines, managing changing requirements, giving feedback (professional and inter-departmental), and reporting to VIPs. You can try handling these spinning plates on an ad-hoc basis, but you’ll soon find that your team’s effectiveness faces a bottleneck (you). By systematizing as many of these tasks as possible in an intelligent way, you will save yourself and the people around you a tremendous amount of trouble and heartache.
Or team of teams, or team of teams of teams, etc
Individual Contributor
or around
Protip: It helps to hire or steal someone who is good at such things and can complement your technical skill/understanding. Ideally this person has a strong technical background as well, but should also love spreadsheets, Gantt charts, and placing purchase orders. If you are such a person, let me know if you want a job.
An experienced hire still needs to be onboarded to your way of doing things, though. They need to learn all of the implicit knowledge required to function in your org. This is really hard to formalize. You practically need an ethnographer to embed with your team/company and write down literally everything that people do.
or more!
The way I imagine it, these fridges would still be doing useful experiments, but with chips fabricated in known good/reliable processes with relatively proven instrumentation configurations. This leaves the door open for educational debugging, but also allows trainees to expect things to work at a certain level.
Demand?
I’m just guessing about lead times here. Please don’t tell me it’s longer.
Yeah, yeah fluxonium + whatever IBM did to their transmons has fixed this one. Outside of IBM, this continues to be difficult.
I think of these as mini-dissertation defenses, but more collegial and collaborative.
Small ones, maybe?
The sad truth is that I, and many team-leads I know, spend most of our time devoted to inter-team communication, trying to ensure that our teams get the inputs they need to keep working productively.
This meeting could have been an email, etc etc
As usual, this is less black-and-white than I make it seem. There are times when ‘off-topic’ conversations are actually important and fruitful. It is up to you to demonstrate good judgement in determining which digressions are which.
Just don’t get trapped by myopia here. It’s shockingly easy to start optimizing for some spreadsheet number at the cost of true performance or long-term organizational health. This is also why you should think very, very carefully about what to measure.
I say this from bitter experience.