GDS Workshop Blog

LD Debates as Risk Assessment & Management

Mar 08, 2019 | Global Debate Symposium Articles


Bailey C. Rung

Ridge High School – LD & CX Coach

Global Debate Symposium – Lincoln-Douglas Debate Instructor


I. Overview


 “It is not certain that everything is uncertain.”

 -- Blaise Pascal

“Uncertainty is a quality to be cherished, therefore – if not for it, who would dare to undertake anything?” 

-- Auguste de Villiers de L'Isle-Adam

 “Always Be Weighing.”

 -- Bailey C. Rung


Uncertainty is an endemic feature of thought at all levels; in any situation, the beliefs we form and the choices we make are governed by calculations that strive towards the elimination of doubt. Yet few of us, it seems, can ever really be 100% confident of anything. The future is not written in stone. Perception of present reality is shaky at best. Most can’t even agree on fundamental history. I can’t remember what I had for lunch yesterday, nor can I tell you what it will be tomorrow. Nevertheless, we do our best to approximate – estimating the overall probability of something and its potential significance when forming beliefs and actions. And once we formulate beliefs about risk, we act on them. In fact, one’s relationship with uncertainty appears to be one of the few self-evident, universal phenomena of experience.

What I’ve just described is risk assessment & management (evaluating and responding to risk), and I’ve come to see it as endemic to cognitive explanations for belief and action. Particularly so in Lincoln-Douglas debate. Participants are given a background knowledge of philosophy, a few weeks of topic preparation, and up to seventeen 60-minute blocks in a weekend to do the following: establish a systemic interpretation of the world, defend an ethic from it, propose a pursuant course of action, engage this position with another’s position or positions, and decide the superior advocacy. At every turn, students and educators inevitably wrestle with the question of uncertainty in how they should argumentatively and performatively contribute to the resolutional dialogue – under fairly tight conditions. The answer to this question always appears to be some form of risk assessment & management in making the case for a debater’s advocacy.


The goal of this article is for readers to come to treat debates, particularly LD, as risk assessment & management (RA/M). First, I will justify the appropriateness of RA/M and propose a model for argumentation and evaluation – as well as offer clarifications and defenses. This constitutes the bulk of the article. Second, I will establish paradigm issues relevant to this model. Third, I will offer practical recommendations to debaters. I will also include a post-script defending evaluative modesty and its use in this article. Cited works are included at the bottom.


My hope is that practitioners will observe that risk assessment & management inevitably governs decision-making, and will thus approach debates accordingly.


My inspiration in writing this article, academically speaking, comes from my various studies in cognitive research, systems thinking & organizational management, social & political science, and argumentation & discourse theory. Drawing on a variety of different yet inter-linked fields of study will hopefully provide fresh and exciting insight into debate, and LD in particular. There is real value in going beyond traditional philosophy and normative practices in thinking about debate.


Thank you to John Corum, Elaine Huang, Charles Karcher, Shankar Krishnan, Bob Overing, Tyler Prochazka, Alex Rivera, and John Scoggin for lending their time and energy in providing me feedback for this article. Thank you to Aaron Timmons for working with me in putting this article out. I also want to express appreciation to Megan Koch of Illinois State University, Dr. Jessica Furgerson of University of Cincinnati – Blue Ash, Chad Meadows of Western Kentucky University, Dr. Eric Morris of Missouri State University, and Sue Peterson of Cal. State University – Chico, who have agreed to provide feedback as I work on this concept going forward. Each of you has provided me with valuable insight and inspiration.


II. Towards Risk Assessment & Management

My central argument is that competitors should frame their advocacies through risk assessment & management (RA/M), and that judges should decide the winner based on the more plausible RA/M calculus. My contention is NOT that we should eliminate framework arguments that use philosophical theories to filter substantive offense, NOR is my contention exclusionary to performance or theory arguments. Rather, RA/M should be the underlying lens by which all arguments are considered. Additionally, while I will defend the desirability of this interpretation as well as generate an applicable model, my primary justification for appropriateness rests on the inevitability and reliability of RA/M in debate. I contend that all argument is comparative, and that comparison is best undertaken through my model.

Ultimately my analysis will conclude by deriving and outlining my model, but I will also include it here as a clarifying preview:

  (1) Competitors and judges should accept that debate occurs under a backdrop of uncertainty,
  (2) Competitors and judges should adopt a risk assessment & management approach to argumentation and evaluation, respectively;

Thus,

  (a) The strength of all arguments’ warrants and impacts should be determined by the level of risk that they are true,
  (b) The level of risk that an argument’s warrants and impacts are true should be determined by the likelihood and severity established by the data supporting the argument. Timeframe and sequencing factors should be considered alongside likelihood and severity in determining risk,
  (c) The comparison of all arguments should be determined by balancing their respective levels of risk,
  (d) The balance of arguments based on the level of risk that they are true should be determined by weighing in favor of the argument with the higher level of risk,
  (e) The superior position should be determined by a favorable balance of arguments based on risk;

Thus,

  (3) The judge should award the win to the competitor advocating for the superior position as determined by reasonable consideration of risk.

In lieu of hypothesizing an entire debate to demonstrate my model’s use, I have included a number of examples based on things that happen in real rounds. I also want to emphasize that RA/M is a heuristic intended for contextualization, so I hesitate to draw definite conclusions about how debates would play out and be decided based on a single potential scenario.


To begin to understand risk assessment & management as well as its appropriateness, let’s start by characterizing an LD debate round: an adult judge must choose which side did the better debating by determining which student competitor best interprets and answers the question posed by the resolution. Putting aside questions of speech times, evidence norms, etc., debate is a game of proposing and evaluating arguments based on their truth. (I use the term ‘truth’ here in a relative, subjective sense – the argument’s level of plausibility in the context of the round. First, one does not necessarily need to test the capital-T ‘Truth’ of the resolution to win the debate – just defend whichever position is most persuasive to the judge. Second, we’re not entirely sure the capital-T ‘Truth’ exists – so again, truth is best treated as contextual, especially in such a short round.) The resolution provides context for the game.

Nearly all LD resolutions in some way share three things in common: (1) a subject – things that are assigned moral status, (2) a predicate – things certain agents morally or normatively ought to do, and (3) an object – a system that said moral things belong to. Take for instance the 2019 March-April topic Resolved: The illegal use of drugs ought to be treated as a matter of public health, not of criminal justice. Here, the subject is illegal use of drugs, the predicate is ought to be treated and potentially Resolved:, and the objects are public health and criminal justice. Here there is no actor specified to the ought. Put another way, all resolutions ask whether a specific ought applies to a particular thing in a given system. Competitors win the debate by convincing the judge that their reasoning about oughts concerning things in a system is more plausible in comparison to their opponent’s reasoning from a backdrop of uncertainty.

When one examines the nature of ‘Oughts’, ‘Things’, and ‘Systems’, the necessity of RA/M begins to emerge.

First, the strength of an ought is based on confidence in our reasons for it. ‘Oughts’ are best explained through a cognitivist understanding of judgments. I make this claim because the study of cognition is robust and has applications in a variety of philosophical, political, social, and scientific fields. It can at least be made to accommodate anything from speculative claims about materialism, metaphysics, and ontics to normative ethics as well as non-ideal theories. T. M. Scanlon (2003) explains in his revolutionary paper Metaphysics and Morals that when an agent judges they have conclusive reasons to believe something, they must form an intent to act in accordance with that belief. This is constitutively true – morally sound judgement by its nature requires that our thoughts and actions be consistent with good reasoning. Given our judgements are about what a reasoning person would do in familiar situations, there can be weight behind the prescription of oughts. Of course, determining what is a good reason necessitates minimizing doubts that would otherwise detract from that reason. Any ethical framework’s strength is then proportional to the confidence placed in it by the judge based on the veracity of the arguments for it given by the debater.

Second, the moral perception of things depends on confidence in the empirical data describing them. ‘Things’ are the objects about which we have moral disputes. Scanlon writes that scientifically and metaphysically speaking, things like illegal use of drugs do not possess innate moral features – to say otherwise is unverifiable. Instead they are empirical facts of the world that we react to in moral ways, so we assign them moral status. (We’ve already established how this process of recognition is refined by deliberation about the force of the reasoning behind the moral way in which we recognized said thing.) What data we consider empirical fact is then likewise filtered by the level of uncertainty we have in our reasons for believing so. The judge, then, is always evaluating the substantive moral evils/goods that things like drug use generate based on the level of confidence they have in the data saying so.

Third, ‘Systems’ are innately complex and our understanding of them is tempered by the firmness of our conceptual grasp. Michael C. Jackson’s (2003) landmark work Systems Thinking: Creative Holism for Managers defines a system as “a complex whole the function of which depends on its parts and the interactions between those parts” [3] and includes examples ranging from industrial manufacturing to philosophy itself. Systems are diverse and dynamic at all levels. Jackson, citing Hegel, explains the process of understanding these systems as an unveiling of knowledge based on constant revision of our beliefs about them. The systems in which we debate oughts and things are nuanced and sophisticated – appreciating that requires judges to ask serious questions about how confident we are in debaters’ interpretations of reality. Establishing a unifying, structural claim about why and how we act as individuals is a much simpler task than doing so for the entire Criminal Justice System.

In all cases, the terms provided by a resolution – ‘Oughts’, ‘Things’, and ‘Systems’ – ask debaters to propose arguments and judges to evaluate them from the backdrop of uncertainty. While this might seem like basic review to some, I find it necessary for two reasons: (i) scholarly consideration of these terms reveals nuances that hint at the role of RA/M as a response to uncertainty in engaging debate, and (ii) it provides a context in which to justify RA/M and develop a debate-specific model.


The response to uncertainty must be, and already largely is, risk assessment & management. Management advisor Nihad Nakaš (2017) writes:

“Uncertainty pervades our existence and is virtually impossible to rule out from anything that we do. Uncertainty also triggers risks, those possible events in the future that may adversely affect what we are trying to achieve. It therefore follows that risk and risk management are an inseparable part of our everyday experience. From the time we wake up, throughout hundreds of little decisions made each day until the time we turn off the lights at night, we consciously (and sometimes even unconsciously) manage hundreds of risks. By default, this makes each of us a risk management machine. If it were not for effective risk management, our personal and professional performance in terms of managing our resources, planned deadlines and achievement of our objectives would be at stake. Yet, when we translate risk management concepts to organizational setting, and in public sector organizations in particular, we often see that important risks get overlooked or managed improperly.”

It's easy to see how this translates to argumentation and evaluation in LD, especially considering my previous analysis of what the resolution asks. Debaters advocate for taking particular actions (or maintaining the status quo – not to mention theory and performance issues) under a backdrop of uncertainty by using risk management: estimating the likelihood and severity of events that would either motivate or interfere with said action. The debater’s presentation of this argument to the judge is then evaluated in comparison to that of another, with the judge ultimately deciding the victor based on their own risk management: whichever debater’s overall advocacy has a higher plausibility. This entire process is what we otherwise refer to as ‘preponderance of evidence’. This demonstrates that the idea that all argument is comparative and the process of risk assessment & management are one and the same. As Nakaš further notes, proper risk management requires quality information and its responsible estimation – and also that the particular form of risk management in question be suited to its context. This is my primary argument in justifying an RA/M approach to debates: we are already doing it, so let’s formalize it. Nakaš generally outlines risk management as:

“[I]dentifying risks to our objectives, understanding where they are coming from (causes) and what may happen if they materialize (effects), assessing their impact and likelihood, prioritizing the critical ones and determining a reasonable response, all the while monitoring its appropriateness and adjusting as necessary.”

Let’s break that down in LD and CX terms: First, Advantages/Disadvantages/Turns/Violations-Standards/Criticisms fall under the category of ‘Causes and Effects’. Second, Plans/Counterplans/Status Quo/Interpretations/Alternatives fall under the category of ‘Reasonable and Appropriate Response’. Third, criterions such as Frameworks/V-VCs/Voters/ROBs are how we engage in ‘Prioritization’ of which overall advocacy to vote for. ‘Assessment’ is the final category. While Nakaš sees risk assessment proper as only a step between causes and effects and prioritization, I argue that risk assessment should be how we evaluate every step of risk management – not just causes and effects. After all, debaters may agree on a cause-effect relationship such as warming-extinction, but disagree on the response or framework. Responses such as Bolshevism or Culture Jamming are likewise evaluated on their potential to be adopted and utilized, among other metrics of risk. Even ethical frameworks that issue prioritization ‘imperatives’, such as rationality or utility, are themselves estimated and compared through their likelihood to be true and their potential severity. In real-world practice, risk assessment contributes to management decisions at every level. Jeff Copeland (2017), a manager at RiskLens, writes:

“The broader risk assessment process typically includes: Identification of the issues that contribute to risk, Analyzing their significance…, Identifying options for dealing with the risk issue, Determining which option is likely to be the best fit…, and Communicating results and recommendations to decision-makers.”

We can successfully compare this to Nakaš’ conceptualization of risk management: First, ‘Identification & Analysis’ is used to isolate causes & effects. ‘Options’ are those presented to be considered in formulating a reasonable & appropriate response. ‘Determination’ of the best option is done under the prioritization of criterions. This cements my argument that assessment & management go hand-in-hand. The only distinction is that assessment is how we come to a belief, and management is how we behave pursuant to that belief. It is not a coincidence that the process of coming to a belief about risk necessarily informs the actions we take in response to risk – it is a precise mirror of Scanlon’s argument for a cognitivist understanding of judgement. Our beliefs about risks are informed by reasoning over likelihood and severity, and sound judgement relies on having good reasons for those beliefs and acting on them. Ultimately the best reasons are those that have the highest risk of being true. As such, the idea that debaters advance their arguments from an RA/M standpoint, and that judges evaluate these arguments based on their own RA/M, is sound. From this, I conclude an RA/M lens is appropriate for LD debate in that risk is constitutive of understanding, explicating, and evaluating arguments.


RA/M’s desirability can be expressed in two ways: First, by expanding on the concept of evaluative modesty and second, by evaluating practical applications of RA/M thinking.

Initially, debate coach & theorist Bob Overing’s (2015) influential thoughts on evaluative modesty are compatible with RA/M thinking – this then compels us to apply evaluative modesty to all levels of debate. (Elsewhere, evaluative modesty is sometimes known as ‘hedging’ or ‘epistemic modesty’, depending on context.) Overing, primarily concerned about rounds in which competitors/judges put total confidence in a single Role of the Ballot and use it alone to evaluate substantive arguments, writes:

“A ‘modest’ way is to consider all the relevant moral systems, one’s credence in each of them, and the magnitude of the possible outcomes according to those theories.”

This is justified given Overing’s summation of the moral philosophies of Ross and Sepielli, which reads:

“They have found that when one is uncertain, the best principles of rational decision-making and our basic intuitions tell us to consider more than one moral theory.”

Two observations can be made that link evaluative modesty to RA/M: (i) Overing correctly treats problems of comparison as a question of uncertainty, and holds that our rational decision-making response should be comparison through probability and significance – characteristic of RA/M thinking and language, and (ii) Overing’s characterization of ROB debates is similar to other issues of comparison at all levels of debate. (i) appears self-evident, so I want to touch on (ii) briefly: one could surely say that using the buzzword of nuclear war to categorically rule out consideration of every-day violence is fundamentally the same issue as using policymaking frameworks to totally occlude consideration of ontological antiblackness. While the moral elements are different, we are left with two impacts that really ought to be considered through risk. How plausible each impact is depends on its likelihood or severity, and how significant it is depends on its risk relative to that of another impact. Similarly, we would surely say that we should evaluate a counterplan’s solvency against that of the affirmative. So if we accept (i) as true (that evaluative modesty implies RA/M), and have good reason to believe (ii) (that evaluative modesty applies to nearly every level of comparison), then RA/M accrues the same theoretical benefits as an evaluatively modest calculus and expands them.

Overing identifies three primary benefits of an evaluatively modest approach.

First, “it balances the benefits of policy-based education and critical education derived from continental philosophy, critical race theory, rhetoric, etc.”. Extending modesty through use of RA/M at all levels encourages meaningful education about a variety of impacts, plans, etc. RA/M foregrounds modesty and as such provides an incentive for debaters to better defend and interact their arguments - rather than simply ‘layering up’ at the first sign of contestation.

Second, it maintains an equitable conversation: both parties are given a chance for effective counterword. Using RA/M foregrounded by evaluative modesty maintains a fair dialogue, preventing debaters and judges casting away various arguments at first glance. This equitable conversation is valuable in-and-of itself, since it’s the constitutive function of debate and the means to arrive at the most plausible truth.

The third “advantage of modesty is to return ROBs to the realm of normal argument evaluation”. For Overing, debate on all subjects should be comparison considering levels of uncertainty – ROBs should not be an exception. This contention cements the case for RA/M as an expansion upon evaluative modesty. More importantly, it underscores the purpose of this article: we need a formal model, because often judges and students do not abide by ‘normal argument evaluation’ since there isn’t really clear agreement on what that means. We have a tough time explaining why even under the same framework, some impacts are removed entirely from calculus merely due to the presence of marginal defense. We also have a tough time explaining why an alternative solves without explaining why individuals are likely to enact it or how significant it will be in combatting a power structure. These are just a few examples, but they reveal a need for evaluative modesty to go beyond framework and be incorporated into RA/M. It’s necessary for substantive education and equitable dialogue from a theoretically reliable standpoint.

Next, a review of RA/M as it is applied in the real world demonstrates its practical value out-of-round.

First, RA/M builds relevant political and legal skills. Law professor D. Don Welch (2014) writes:

“Once we think we know something about the probabilities of the outcomes of various scenarios, decisions must be made about how to incorporate those findings into the public policy process.” [73]

He follows-up, saying:

“Such questions are best answered not in the abstract, but should be tailored to concrete cases. Evidence-based policy making has become the new touchstone for good governance.” [73]

Welch is essentially explaining that for all of its theoretical worth, RA/M has fittingly become the predominant framework for policy-relevant decision-making. Students looking to work in public administration will need to communicate and think in the RA/M language in order to succeed and enact real change.

Second, RA/M is a vital component of critical projects. The late critical sociologist and philosopher Zygmunt Bauman (2010) wrote:

“There is…no single, unambiguous route leading from the second enlightenment stage, to the third—that of practical action aimed at adjusting social reality to the newly accepted set of meanings. It is on this decisive threshold where courage and the decision to take risk become indispensable vehicles; and, to be sure, where the gravest and most costly mistakes can be made, more often than not confounding the very emancipatory intent of action.” [99]

Bauman strikes at the common chord of marginalized people’s theories and movements by establishing the importance of risk and its calculation in the moves we make. Tough decisions will be necessary to bring about a lived revolution: whether to choose strikes, propaganda, or other means of pressuring power structures, and how much room for error and/or opportunity exists in our activities. Any critic’s praxis is then innately based on consideration of risks. Regardless of one’s particular ‘corner of the library’, RA/M thinking builds portable skills necessary to achieving their ends.

Psychology professor Gregg Henriques (2016) concludes that in the current socio-political climate, Trump’s administration and the context from which it arose have foregrounded the risk posed by uncertainty. There has never been a more crucial time for both politicians and individuals writ large to adopt RA/M thinking – it is perhaps our only means of survival in the face of uniquely unpredictable and catastrophic violence. Overall, RA/M is desirable in that it builds practical thinking and communication skills that debaters can use in nearly any field of work or unexpected situation. This is especially the case given RA/M’s theoretical reliability outlined above.


The model that I propose formalizes contemporary debate thinking with the RA/M process outlined above. I’ll begin by analyzing the normative practice of ‘impact calculus’, follow-on by solidifying applied RA/M practices, and conclude by outlining my model.

Let’s start with the familiar practice of impact calculus, the method by which the significance of arguments is usually determined in debate. The National Speech & Debate Association (NSDA) provides the following free Impact Calculus Handout that summarizes the practice:

This provides a strong basis for a formalized RA/M model.

First, impact calculus is a familiar concept in accepted common usage. That makes it apt for improvement rather than moth-balling.

Second, the handout is quick to point out that an impact is why an argument matters, not just an ‘impact’ at the bottom of a disadvantage. This affirms my earlier argument that risk assessment applies to more than just evaluating the significance of cause-effect relationships from the management perspective. Our responses (Plans, etc.) and criterions for prioritization (ROBs, etc.) are also worthy of impact calculus. I emphasize this because I see it as the most common error in conceptualizing the practice as it stands.

Third, impact calculus determines risk primarily by evaluating ‘Probability’ and ‘Magnitude’. This is consistent with RA/M language of likelihood and severity (the terms are synonymous, but for the sake of consistency I will stick to the formal RA/M language). I have already established why this formula is the appropriate response to uncertainty.

Before I go further, I would like to resolve issues relating to secondary classes of impact calculus. The handout identifies ‘Timeframe’ and ‘Turns Case’. Elsewhere, ‘Reversibility’ and various forms of ‘Sequencing’ appear. Timeframe is really a sort of tie-breaker between likelihood and severity, insofar as time provides the metric for the manifestation of the impact. Suffering happening now is 100% probable. As Complexity Theory contends, the farther out the forecast, the less chance it will actually occur. That said, Overing points out that time also implicates how we might under-value or over-value something’s severity. Reversibility is a special category of severity that also concerns complexity. Splinters aren’t usually severe, because we can easily pull them out. Catastrophic nuclear meltdown is a much different matter. Sequencing is the most distinct, referring to the concept of one impact being a prerequisite to or somehow influencing another. This is sometimes also known as Turns Case, Shortcircuiting, or Inclusivity. Really, they all say the same thing as sequencing. While sequencing is obviously influenced by the likelihood or severity of an impact occurring that would affect another, its distinction lies in its consideration of direct causal relationships between impacts. From this, I conclude that sequencing should operate as another distinct ‘tie-breaking’ way of assessing/managing risk – though our confidence in a sequencing claim should nevertheless be tempered by likelihood and/or severity. In all cases, this interpretation of impact calculus provides an appropriate basis for a formal model of RA/M for debate.

Now, I’ll turn my attention back to applied RA/M practices. Sound judgment in management entails proper consideration of risks to form acceptable reasons for adopting and acting on beliefs, with risk being assessed by the potential likelihood and severity of things (and, of course, any timeframe or sequencing factors). A generic visual representation of this process, provided by TeamGantt, an RA/M consultancy group, looks like the following:

I’ve included this matrix in order to make three observations, the implications of which in turn should – and do – inform my model.

First, RA/M should not just be relegated to the impact of an argument as per the NSDA handout. Determination of risk should also consider the warrants of an argument. An argument is as plausible as our confidence in it, and that confidence is determined by the likelihood and severity established by the data supporting it.

Second, once we have measured something’s severity and likelihood, we must then assign it an overall level of risk. This rarely occurs in round. Further, once something’s risk has been assigned, it must be compared to that of another. This is also rare. Impact calculus as it is currently practiced almost always stops once likelihood and severity are measured, with the judge filling in the blanks based on ‘guess-timation’. Conducting proper RA/M requires all steps be taken, and this is assisted by use of a matrix – mentally or graphically represented.

Third, there is room for play in how we consider levels of likelihood and severity. Likelihood could be represented by percentages, a la the bell curve. Severity might be classified on a scale of negligible-to-catastrophic based on a variety of qualitative or quantitative metrics. Likewise, arguments can be made about what to weigh more heavily: likelihood or severity, or how we consider timeframe and sequencing claims. A debate this ‘meta’ might be ambitious if not unrealistic in every setting, but the best debates in both LD and CX at least tacitly consider these issues. A formalized RA/M model is conducive to this type of weighing. In all, valuable lessons can be gleaned from exploring RA/M as it is applied in the scholarly and professional worlds.
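To make the 'room for play' concrete, here is a minimal sketch of a risk matrix in Python. The level names, the numeric scores, and the mapping from labels to numbers are illustrative assumptions of my own, not fixed debate norms; any scale competitors explicitly agree to would serve the same role.

```python
# Hypothetical qualitative-to-numeric mappings for the two axes of a
# risk matrix. These particular values are assumptions for illustration.
LIKELIHOOD = {"remote": 0.05, "unlikely": 0.25, "possible": 0.5,
              "likely": 0.75, "near-certain": 0.95}

SEVERITY = {"negligible": 1, "marginal": 2, "serious": 3,
            "critical": 4, "catastrophic": 5}

def risk(likelihood: str, severity: str) -> float:
    """Overall risk as likelihood x severity, one common matrix formula."""
    return LIKELIHOOD[likelihood] * SEVERITY[severity]

# Comparing two impacts once each is assigned a cell in the matrix:
nuclear_war = risk("remote", "catastrophic")         # 0.05 * 5 = 0.25
everyday_violence = risk("near-certain", "serious")  # 0.95 * 3 = 2.85

# Under this assumed scoring, the high-probability impact outweighs the
# low-probability one rather than being excluded at first glance.
assert everyday_violence > nuclear_war
```

The point is not the arithmetic but the completed comparison: both impacts receive an explicit level of risk, and the balance between them is stated rather than left to the judge's 'guess-timation'.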

Finally, what would a corresponding model for RA/M look like in debate? I offer the following rubric in consideration of our discussion up to this point:

  • Competitors and judges should accept that debate occurs under a backdrop of uncertainty,
  • Competitors and judges should adopt a risk assessment & management approach to argumentation and evaluation, respectively;

Thus,

  • The strength of all arguments’ warrants and impacts should be determined by the level of risk that they are true,
  • The level of risk that an argument’s warrants and impacts are true should be determined by the likelihood and severity established by the data supporting the argument. Timeframe and sequencing factors should be considered alongside likelihood and severity in determining risk,
  • The comparison of all arguments should be determined by balancing their respective levels of risk,
  • The balance of arguments based on the level risk they are true should be determined by weighing in favor of the argument with a higher level of risk,
  • The superior position should be determined by a favorable balance of arguments based on risk;

Thus,

  • The judge should award a win to the competitor advocating for the superior position as determined by reasonable consideration of risk.
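The core of the rubric above can be sketched in a few lines. This is only an illustration under simplifying assumptions of my own: the numeric scales, the multiplicative definition of risk, and the tie-break by timeframe are hypothetical choices, not part of the model itself, which deliberately leaves room for debaters to contest how likelihood and severity are characterized.

```python
# A minimal sketch of the rubric: each argument carries a likelihood, a
# severity, and a timeframe; risk combines the first two, and the superior
# argument is the one with the higher level of risk (nearer timeframe as a
# tie-break). All scales here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Argument:
    name: str
    likelihood: float  # chance the warrants/impacts are true, 0.0 to 1.0
    severity: int      # magnitude of the impact, 1 (negligible) to 5 (catastrophic)
    timeframe: int     # time until the impact occurs; smaller = sooner

    @property
    def risk(self) -> float:
        return self.likelihood * self.severity

def superior(a: Argument, b: Argument) -> Argument:
    """Weigh in favor of the argument with the higher level of risk,
    breaking ties by the nearer timeframe."""
    if a.risk != b.risk:
        return a if a.risk > b.risk else b
    return a if a.timeframe <= b.timeframe else b

war = Argument("nuclear war", likelihood=0.05, severity=5, timeframe=10)
militarization = Argument("police militarization", likelihood=0.6, severity=2, timeframe=1)
print(superior(war, militarization).name)  # 0.25 vs. 1.2 -> "police militarization"
```

Nothing in the rubric mandates this particular formula; a debater could argue for weighting severity exponentially, or for lexically prioritizing timeframe, and the judge would then compare those competing formulas themselves by risk.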

 

Follow-up comments are offered here to clarify and defend my model. (i) Judges should be conducting their own risk assessment & management pursuant to evaluating competitors’ depictions of risk. This is more-or-less stated in (2), (d), and (3), as well as elsewhere in the article, but nevertheless I want to be clear. (ii) Correspondingly, competitors should modify and debate how they conceptualize levels of likelihood and severity, and judges should favor the formula that itself has a higher risk of being plausible. Absent competitor input on how likelihood and severity should be characterized, judges should default to an explicitly stated formula of their choosing. (iii) The RA/M thinking and language in use by competitors and judges should be contextualized to their argumentative settings. How we qualify/quantify probabilities and magnitudes is necessarily responsive to the specific position type, whether it be a Plan/Advantage, Kritik, or Topicality Shell, etc. (iv) Correspondingly, some might object that RA/M precludes or is incompatible with certain ethics, such as deontology. I respond by asking: aren’t we still obligated to use risk in evaluating the justifications for such a framework? Do we not assess violations of autonomy and dignity based on their likelihood and severity? I have a difficult time hypothesizing a single context in which RA/M definitively cannot and/or should not be used to filter our understanding of an ethic. (v) I consider performances to ultimately be arguments: debaters who tell judges to endorse their reading of media or their presence in the room are really giving logical reasons that their pathic performance is ballot-worthy. (vi) Correspondingly, I avoid using language in my model that implies arguments must be traditional answers to the resolution.
While my break-down of what constitutes a resolution certainly compels a RA/M approach to contention-framing debate, it in no way implies that performances or theory arguments are illegitimate. Winning performances, even non-topical ones, justify their position in relation to the resolution and depict that as a voting issue. Non-topicality-related theory issues, such as counterplan theory, implicate how dialogue under the resolution should procedurally take place. In both cases, the resolution is a relevant issue to these positions even if they don’t directly answer it in the expected way. Regardless, the only stance I take on these positions is that they should also be evaluated under an RA/M approach. (vii) Finally, some might contend that my model imposes an extra burden on competitors and judges that is unfeasible due to the time constraints for speeches and the round. I offer two responses, both under the consideration that we are already always assessing and managing risk: (initially) explicating a conceptual or graphic matrix that determines risk would take the same amount of time and fill the same role as normative impact calculus, and (next) line-by-line argumentation should be explicated through the language of RA/M, so each individual argument does not need an additional, distinct section of RA/M thinking attached at the end.

 

III.           Paradigm Issues

In light of the previous section, I’ll recommend a few paradigm issues that judges would do well to adopt. At the very least, “what’s the risk?” should be the last question asked before finalizing the characterization of an argument or position. While RA/M certainly bolsters the case for my propositions, I will also offer reasoning that independently justifies them.

First, judges should adopt an evaluatively modest mindset at all levels of debate. Judges should think about RA/M as an extension of evaluative modesty, not a brand-new way of thinking. To restate Overing in RA/M terms, evaluative modesty states we should compare conflicting arguments on an issue with their relative plausibilities in mind, instead of being totally confident in just one argument (even if that argument is better than competing ones). Just because a nuclear war might be more important than police militarization doesn’t mean we wholly exclude the latter impact from consideration. Overing notes that in practice this is true in consequentialist impact comparison, but it’s also true in debates over framework, ROBs, theory and topicality, kritiks, etc. Comparison is undertaken through RA/M. The in-round practical benefits to balanced education and equitable conversation as a result of this approach have already been established.

Fundamentally, this practice is necessary for meaningful discourse. Philosopher of logic and argumentation Trudy Govier [2018] reminds us of the centrality of the Principle of Charity in debates, writing:

“From such a conception of argumentative practice, we may derive a principle of charity, just as Grice derived his Cooperative Principle from his concept of the normal function of conversation. The principle of charity will direct the audience to interpret an arguer’s discourse in a way that will conform to the purpose of arguing and considering arguments. Such a principle will direct us to interpret the discourse of others so as to contribute to the argumentative exchange. We presume, other things being equal, that others are participating in the social practice of rational argumentation. That is, they are trying to give good reasons for claims they genuinely believe, and they are open to criticism on the merits of their beliefs and their reasoning. They are operating within the purpose of the exchange: that is, it is their purpose to communicate information, acceptable opinions and reasonable beliefs, and to provide good reasons for some of these opinions and beliefs by offering good arguments. If we make this presumption, then if there is an ambiguity in the discourse, and we can interpret it either as well reasoned or as poorly reasoned, we will opt for the more sensible interpretation. The assumption that people are trying to put forward good reasons for claims that they believe provides a basis for moderate charity in the social practice of argument and its functional prerequisites.” [122]

In no small terms, debate cannot take place if we simply dismiss others’ arguments outright rather than seeking to understand them. If we did, meaningful examination of reasons through dialogue would evaporate. This is the Principle of Charity, the foundation of which is the “nature and purpose” [122] of argumentation itself. It’s easy to see, then, why judges should be modest in how much confidence they place in any one argument, and should ultimately decide the winner based on whose interpretation is more sensible per assessment & management of risk. Overing asks judges to consider both sides’ frameworks when evaluating offense, and the round as a whole. The debate community asks judges to consider both sides’ impacts through ‘impact calculus’. I then ask that judges consider things like links/link turns, conflicting data, standards, plans/CPs, etc. in comparison to each other using risk. In all of these cases, evaluation is best conducted through RA/M. In order to conduct proper RA/M at all levels of the debate, one must first adopt an evaluatively modest approach to thinking. Judges have good reason to broadly adopt evaluative modesty, especially considering the Principle of Charity.

Second, judges should allow and encourage 2NR and 2AR weighing. It has become a frequent occurrence to see debaters argue that one should not be allowed to make weighing arguments in the 2NR/2AR that were not explicitly made in the 1NC/1AR – obviously I see this as problematic.

Initially, ‘weighing’ as we know it drives debate – whether thinking about it theoretically or practically, I’ve already demonstrated that risk assessment over impacts in particular is the primary route to the ballot, since overall likelihood and severity of an impact motivates what we do about it. It then seems unreasonable to say that debaters should be denied the opportunity to weigh at certain parts of the debate.

Next and likewise, the 2NR and 2AR are speeches that are more-or-less intended for weighing. In CX debate, the 2NR and 2AR primarily weigh relevant arguments necessary to win whatever positions are left. This is because at a certain point, the value of warrant comparison diminishes in overall calculation of risk. The same ought to be true in LD debate, where (i) the 2AR is half the length of the 2NR and is the last speech, and (ii) the 2NR is the place where the negative must ‘choose what to go for’. It is even more unreasonable to say, then, that debaters should not be allowed to weigh impacts in their last rebuttal speeches. While I believe that new warranting or construction – even for impacts – should not be allowed in the 2NR or 2AR (unless there are extenuating circumstances), the same is not true of weighing impacts.

Finally, let’s consider debate coach Adam Torson’s (2012) article “Three Things You Can Do to Improve Your 2NR” to cement my point. Torson identifies collapsing down to a concentrated core of arguments and positions, preplanning of this collapse, and exploitation of 1AR errors as criteria for high-quality speeches. Yet what Torson is really describing is how the 2NR ought to look. The 2NR must wait to hear the 1AR responses before being able to decide ‘what to go for’. Thus, accurate and thorough weighing by the negative can only take place in the 2NR. And since the 1AR determines the 2NR, the 2NR determines the 2AR. The 2AR cannot effectively sell a story based on the linear progression of the debate – especially in such a limited timeframe – without employing impact weighing. Especially since the 2AR is where the affirmative ‘chooses what to go for’.

In summation, so long as the impact was developed in the 1NC/1AR, debaters should be allowed to wait to make distinct weighing arguments until the 2NR/2AR. Looking at the implications of RA/M as well as the structure of debate itself, judges have good reason to allow and encourage this practice.

Third, rigorous, risk-responsive Role of the Ballots/Frameworks should be rewarded. I can’t imagine a single judge or educator in the community who hasn’t had the following thought at least once per tournament: “what in the name of all that is fair and educational does this framing mechanism actually mean?” Let’s start by considering an ideal ROB. Coaches and brothers Jacob and Matthew Koshak (2014) note that all weighing mechanisms are really ROB claims, since everything from a topicality voter to a policymaking-utilitarian framework “…defines the function of the debate space according to some end goal of debate”. They then explain that rigorous ROBs are appropriately justified, accessible to other competitors and judges, and have substantive arguments linked back to them. The reasoning for these observations seems self-evident, so let’s expand on them through RA/M thinking-language.

Initially, the justifications used by debaters for their ROBs should be refuted and evaluated in terms of likelihood and severity. Too often, debaters cite an anecdote or line of jargon and merely assert that it makes the ROB totally true, with many judges being all too willing to comply. Judges should temper their confidence in ROB implications by the relative level of risk of the ROB being true in order to eliminate inconsistencies in framing evaluations.

Next, accessible ROBs should define what is considered risk and how it is calculated. This doesn’t have to be put in normative RA/M language explicitly and/or artificially. Rather, ROB texts need only be clear and fair in how debates under them should take place – keeping in mind the inevitable backdrop of uncertainty. Too many ROB texts suffer from language choices that are either incredibly vague or unnecessarily exclusionary. RA/M thinking offers inspiration for writing ROB texts that explain and apply their weighing mechanism, without demanding an awkward formula.

Finally, linking one’s substantive arguments to a ROB should be done in language that reflects both the justifications and wording of that ROB. More specifically, since RA/M thinking-language will varyingly pervade the construction of the ROB, the reciprocal degree and qualia of RA/M thinking should be used for argument linkage.

I thus contend that RA/M thinking-language provides valuable lessons in arguing and evaluating ROBs, not just for the reasons provided in my analysis of Overing’s thoughts, but also because what the Koshaks objectively consider a rigorous ROB innately lends itself to consideration of risk. In all cases, judges should prefer (though not exclusively, as per evaluative modesty) ROBs that are rigorous and responsive to risk. Through this, judges would accomplish two things: (i) creation of an incentive for the debate community to write better ROBs and (ii) achievement of more careful evaluation of framing issues.

 

IV.          Practical Recommendations

Finally, I’ll offer some practical recommendations for debaters on application of RA/M skills. As with judges, at the very least debaters should ask “what’s the risk?” as a last question before delivering an argument. Student competitors should be able to develop successful habits from these pieces of advice, even if they don’t entirely subscribe to an RA/M model of debate.

 

First, debaters should appreciate and reflect the importance of uncertainty and risk. This statement can be understood in a variety of ways.

Initially, debaters should recognize that their opponents, judges, and they themselves possess particular degrees of certainty about how they practice debate. I frequently see debaters assume that their opponent’s characterization of an argument is simply true and abandon contestation, and likewise see their judge’s paradigm as totally static and exhaustive. Appreciating the uncertainty of others in these situations can give students the courage to push their argumentative boundaries; ultimately this makes for more nuanced and thorough debates.

Next, debaters should prioritize explicit risk comparison in their rebuttal speeches. Ultimately, most judges will still evaluate rounds primarily through impact calculus, so students can help influence how a judge sees the entire speech by weighing at the beginning. There is good reason that this style of framing is so popular and long-lasting, so debaters should continue the practice and improve on it by more formal assessment of risk.

Finally, debaters should be creative in their application of RA/M thinking. As I’ve mentioned, RA/M is ultimately a question for all parts of the debate, not just the impacts. Debaters can contextualize a perm-doublebind by looking at the particular risk factors at play – perhaps the Kritik’s impact to the perm is quite likely, but not so severe that combining a policy option would obliterate the alternative. Debaters can evaluate the strength of a competing-interpretations voter on how likely norm-setting is, not just why it is significant. These are but a few examples of places in debate where risk is a highly salient issue that students can resolve with contextualized RA/M.

The best debaters are those who rise to the challenge of uncertainty, and those who win respond with RA/M skills – whether they know it or not.

Second, debaters should plan their positions and rounds backwards. Extending Torson’s call for 2NRs to collapse the debate in a strategic manner, there is an overall imperative to chart a specific path for achieving set goals as a condition for victory. Judges vote on arguments in the last rebuttals. Winning last rebuttals concentrate on core arguments and positions, and motivate them through weighing. The constructive arguments are only as valuable as their ability to set up last rebuttals. Taken together, it becomes easy to understand why we should plan debate rounds backwards and what that means: everything one does to prepare and execute their advocacy is necessarily filtered down to the first minute or two of the 2AR/2NR – debaters should prepare and execute accordingly. RA/M is invaluable in navigating ‘backwards planning’.

Initially, RA/M studies establish the idea that individuals respond to uncertainty with risk – so understanding that pivotal last minute or two as primarily concerning synthesis of RA/M thinking makes sense. I often ask my students: “does this position allow you the argumentative room to convince the judge to vote for you even if they only listened to your 2NR/2AR impact calculus?”. I encourage debaters to ask similar questions.

Next, RA/M compels debaters to anticipate, based on evaluation of risk, potential counter-arguments to their position and how they might be executed. Prioritizing frontline work is best done by estimating the likelihood and significance of the other side’s position. This sort of debate ‘war-gaming’ sets up debaters to plan their rounds backwards – war-gaming (figurative or literal) innately relies on consideration of risk.

Finally, RA/M is specifically designed for planning future action in the face of uncertainty. Things converge over time based on their likelihood and severity from uncertain to certain, much in the way debate rounds converge in those pivotal few minutes in the 2NR/2AR. RA/M is a method by which we can secure goal achievement against these convergences. The positions we plan to go for and the arguments we will pick to support them should thus be based on how likely they are to be significant to the judge in the context of what one’s opponent might advocate.

Overall, debaters necessarily must plan debates backwards – sound RA/M thinking cements the case for this recommendation.

Third, debaters should develop risk cores. Given the finite number of ethics and impacts that can be defended on any given topic, debaters would be wise to pre-fabricate blocks of evidence characterizing risk as well as analytic skeletons that set up risk comparison.

Initially, the most successful debaters and debate teams are those who maintain universal ‘cores’ of impacts, calculi, and framing mechanisms. For instance, my team keeps a file that outlines each type of impact calculus for most common impacts and impact turns. A particular utilitarianism framework has been floating around the community for some time now. In either case, banking of knowledge applicable across topics improves student comprehension and alleviates preparation burdens. Impact cores with the best research are those whose evidence is from or cites data from risk-related research fields, such as conflict management studies, futures studies, intelligence communities, and of course RA/M studies proper. Framing cores with the best research are those whose evidence is from or cites data from philosophers concerned with issues of cognition, language, and rationality as they relate to ethics. Nearly every imaginable impact or ethic has been at least studied after-the-fact by academics and professionals who are concerned with risk and uncertainty as it pertains to those fields.

Additionally, the most successful debaters are those that practice argument comparison. The more practice, the more flexible and adaptable a debater becomes – comparison becomes ‘second-nature’. The best way for debaters to practice argument comparison – outside of drills – is for them to write out frontlines that either explicitly compare their argument to that of another, or frame their arguments in a comparative way while inserting brackets for future contextualization. Argument comparison – in this case primarily that of impacts and ethics – innately relies on estimation of each argument’s risk and the valuation of different measures of risk. Beyond building tangible skills, construction of these skeletons will also serve to create a universal file that can quickly be drawn from.

Debaters have good reason to develop risk cores for practical and educational reasons.

 

V.             Postscript: Defending Evaluative Modesty

It is necessary to respond to two common challenges to evaluative modesty – or rather to how they implicate RA/M. This is the case given that my advocacy for RA/M owes much to the concept. Elsewhere, Overing (2016) deals with the problems of ‘Casualness’ and ‘Incommensurability’ after an exchange with Christian Chessman.

First, Casualness argues that it is inappropriate to use modesty to compare things like policymaking to performance, primarily because modesty may not take seriously the real-world experiences of debaters. Overing counters with the following: (i) articles (primarily Smith and Vincent) that elevate debaters’ critical performances don’t necessarily exclude other impacts, (ii) poorly explicated arguments for a performance aren’t persuasive in their own right, and (iii) modesty implies that debaters arguing against performances with policy must establish the educational and procedural benefits of their discourse in order to effectively engage.

I won’t expand this debate (especially since Overing’s arguments are decidedly compelling in themselves), but rather filter it through RA/M. One might object that the risk of a policy impact like nuclear war would always have to take priority over the reading of a personal narrative and thus render RA/M inappropriate. Putting aside (i) and (ii), (iii) answers this concern – the policy debaters logically can’t weigh nuclear war in the pre-fiat world with the same significance. Instead, these debaters must give reasons why learning and simulating policymaking initiatives like the plan/advantage is a better pedagogical practice for building real-world skills and character in students than the performance of the narrative. Further still, some debaters who make policy arguments only do so as a heuristic or intervention tactic – in order to learn about how to combat the state, not to endorse it. In each case RA/M resolves these concerns – whether assigning nuclear war incredibly low risk as an impact, or compelling both debaters to analyze the risks that motivate endorsement of their performance. Not to mention RA/M’s potential applications to evaluating (ii), which would determine the risk of the performance based on how it is explicated by the debater, not idealized by the judge. This backs my earlier observation that all performance is really argument in the context of debate.

Second, the Incommensurability argument says that there can be no common frame that a modest judge could use to evaluate such disparate impacts. Overing responds with the following: (i) ‘better-debating’, determined in context, is the default metric by which all arguments are evaluated – but what that means for Kritikal and performance debate isn’t clear, (ii) debaters should create ROBs with more specific metrics so engagement by the other debater can take place, and (iii) incommensurability takes place at all levels of debate, such as when competing-interps vs. reasonability aren’t weighed.

Overing admits that his response to this problem doesn’t leave us with conceptual solutions. Let me offer that solution.

Initially, insofar as my answers to the problem of casualness are sound, then indeed there is already a practical metric for policymaking vs. performance – comparison of risk that determines which advocacy is more likely and significant in positively building a student’s real-world skills and character (this is in my mind the best, but not necessarily the only, candidate to work from).

Correspondingly, RA/M provides the metric for better debating at the conceptual level. Consider a re-characterization of Overing’s answers: (i) and (ii) essentially say that since ‘better-debating’ is determined in context, debaters should write more specific and engageable metrics into their ROBs. Since debate implies that the policy initiative and performance are already being compared, some incommensurability is inevitable regardless of the mutual-exclusivity claims of either debater – that’s (iii). This creates a practicality constraint on the problem of incommensurability; even if it is theoretically true, it must be suppressed since it is both inevitable in debate and malleable through context. The Principle of Charity demands as much for debate to occur. So all evaluative modesty needs is a formula for deciding disparate positions in context. RA/M is the direct and obvious answer – as long as uncertainty inevitably makes us risk managers, we should formalize a model that evaluates all arguments’ plausibility determined through likelihood, severity, timeframe, and sequencing. This applies to both the justifications for and the implications of any argument, including ROBs and performances. So determining the better debater in context means evaluating the comparative risk of a student’s advocacy building real-world skills and character. Refer to the third paradigm issue on ROBs that I recommend for some additional potential ways some of these issues could play out.

Neither Casualness nor Incommensurability is a persuasive response to evaluative modesty or RA/M. In fact, RA/M is the only reasonable option that critics like Chessman could adopt – especially since they offer no real alternatives.

 

VI.          Works Cited

T. M. Scanlon, “Metaphysics and Morals”, Proceedings and Addresses of the American Philosophical Association 77, No. 2 (2003): pp. 7-22

Michael C. Jackson, Systems Thinking: Creative Holism for Managers (West Sussex: John Wiley & Sons, Ltd., 2003): Ch. 1, pp. 1-13

Nihad Nakaš, “Three Lessons About Risk Management from Everyday Life”, Center of Excellence in Finance, 21 Nov. 2017. Available Online: http://knowledgehub.cef-see.org/?p=1399

Jeff B. Copeland, “Risk Analysis vs. Risk Assessment: What's the Difference?”, FAIR Institute, 22 Aug. 2017. Available Online: https://www.fairinstitute.org/blog/risk-analysis-vs.-risk-assessment-whats-the-difference

Bob Overing, “Recovering the Role of the Ballot: Evaluative Modesty in Academic Debate”, Presentation at the Alta Argumentation Conference, 2015. Available Online: https://www.academia.edu/18308105/Recovering_the_Role_of_the_Ballot_Evaluative_Modesty_in_Academic_Debate

Don Welch, A Guide to Ethics and Public Policy: Finding Our Way (New York: Routledge, 2014): Ch. 4, pp. 72-75

Zygmunt Bauman, Towards a Critical Sociology: An Essay on Common Sense and Emancipation (Abingdon: Routledge, 2010): Ch. 3, pp. 98-99

Gregg Henriques, “Trump: A Risk Assessment Perspective”, Psychology Today, 18 Mar. 2016. Available Online: https://www.psychologytoday.com/us/blog/theory-knowledge/201603/trump-risk-assessment-perspective

“Impact Calculus”, National Speech & Debate Association, no date. Available Online: https://www.speechanddebate.org/wp-content/uploads/CX.Vocab_.Impact-Calculus.Handout-1.docx

“Free Risk Assessment Matrix Template”, TeamGantt, no date. Available Online: https://www.teamgantt.com/risk-assessment-matrix-and-risk-management-tips

Trudy Govier, Problems in Argument Analysis and Evaluation, Windsor Studies in Argumentation Volume 6 - Updated Edition (Ontario: University of Windsor, 2018): Ch. 7, pp. 121-126

Adam Torson, “Three Things You Can Do To Improve Your 2NR by Adam Torson”, Victory Briefs, 8 March 2012. Available Online: https://www.vbriefly.com/2012/03/08/201203three-things-you-can-do-to-improve-your-2nr/

Jacob Koshak & Matthew Koshak, “Embracing Difference: The Role of Role of the Ballot Arguments”, Victory Briefs, 21 Feb. 2014. Available Online: https://www.vbriefly.com/2014/02/21/20142embracing-difference-the-role-of-role-of-the-ballot-arguments/

Bob Overing, “Evaluative Modesty: Reply to Chessman”, The Meta – Premier Debate, 23 Jan. 2016. Available Online: www.premierdebate.com/articles/reply-to-chessman/