
Tarski’s Account: the Official Version

Advanced Topics in Philosophy of Logic

Jason Turner

The Tarskian account of logical consequence says, in essence, the following:

∆ ⇒ P iff every re-interpretation of the non-logical terms in that
argument that makes ∆ true also makes P true.

If we simplify matters by focusing on logical truth, it says:

P is a logical truth iff every re-interpretation of the non-logical terms
in P makes it true.
But this isn’t quite what he said — not officially, anyway — and it’s worth getting
clear on what he did say.

1 Satisfaction
The first thing we need to get clear on is the notion of a sentential function. This
sounds fancy, but it isn’t: a ‘sentential function’ is basically just a sentence, except
it might have some variables in it that aren’t bound by any quantifier. We call
variables of these sorts open or free. For instance,

(1) ∼P(a) ∨ (R(x) ∧ ∼∀yF(y))

is a sentential function (what in other circumstances we call an open sentence). It
has one variable — ‘y’ — which is ‘bound’ by a quantifier. But it has another
variable, ‘x’, which is free.
Sentential functions can’t be true or false ‘all by themselves’, any more than
a sentence like

(2) It is tired

can be true or false ‘all by itself’. For (2) to have a truth-value, a value for ‘it’
needs to be specified. If ‘it’ picks out a rock teetering on the edge of a cliff, (2)
is true; if it picks out a sturdy building, (2) is false.
Likewise, we can’t assign (1) a truth-value until we’ve specified a value for
‘x’. If ‘R’ means is red, then the sentential function (or open sentence) ‘R(x)’ will
be true relative to an assignment of a red thing to ‘x’, and false relative to an
assignment of a blue thing to ‘x’.
We can make this notion a bit more formal. Let a variable assignment or a
sequence (they mean the same thing, at least as Etchemendy uses the terms) be
a function from variables to objects. A sentential function such as ‘R(x)’ is true
relative to any sequence that assigns a red thing to ‘x’. In this case, we say such
an assignment satisfies ‘R(x)’. And any sequence that does not assign a red thing
to ‘x’ makes ‘R(x)’ false; such a sequence does not satisfy ‘R(x)’.
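
To make this concrete, here is a minimal Python sketch (my illustration, not anything in Tarski or Etchemendy): sequences are modelled as dictionaries from variable names to objects, ‘R’ is interpreted by the set of red things, and a sequence satisfies ‘R(x)’ just in case it assigns ‘x’ something in that set. The particular objects and the helper name satisfies_Rx are made up for the example.

    # A toy interpretation: 'R' means "is red", so its extension is a set of red things.
    R_extension = {"fire truck", "stop sign"}

    # Two sequences (variable assignments): functions from variables to objects.
    seq1 = {"x": "fire truck", "y": "ocean"}   # assigns a red thing to 'x'
    seq2 = {"x": "blueberry", "y": "ocean"}    # assigns a non-red thing to 'x'

    def satisfies_Rx(sequence):
        """Does this sequence satisfy the sentential function 'R(x)'?"""
        return sequence["x"] in R_extension

    print(satisfies_Rx(seq1))   # True:  seq1 satisfies 'R(x)'
    print(satisfies_Rx(seq2))   # False: seq2 does not satisfy 'R(x)'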

2 Higher-Order Variables
We’re used to using variables as we did above, because first-order logic lets us put
variables into name positions. And we know what kind of things first-order
variables get assigned: they get assigned objects.
What you might not be used to is second-order variables, or predicate variables.
But the idea isn’t that tricky. Just as we can put variables wherever we could
put a name, we can also put a variable wherever we could put a predicate. For
instance, as well as the sentence ‘P(a)’, we could also have the sentence

(3) Y(a)

where ‘Y’ is a predicate, or second-order, variable.


Just like first-order (or name-like) variables, second-order variables need to
have assignments in order for the sentences they occur in to be true or false. But
we don’t assign things (like dogs and baseballs and countries) to second-order
variables. Rather, we assign potential extensions — sets of things (and, for many-
placed predicates, sets of ordered n-tuples of things).
For instance, we might assign a set of objects S to ‘Y’. Then (3) will be true,
relative to that assignment, if and only if the thing that ‘a’ names is in S.
Our sequences won’t just assign objects to name-like variables, but they will
also assign potential extensions to predicate-like variables. A sentence such as

(4) Y(x)

will be satisfied by every sequence that assigns something to ‘x’ that is a member
of the set it assigns to ‘Y’.
Two final comments. First, our sequences are meant to assign something to
every single variable, whether or not it shows up in the sentence we’re interested
in. So if s is a sequence, it doesn’t just assign something to ‘x’ and something to
‘Y’, but also assigns something to ‘z’, and ‘W’, and so on and so forth.
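
Continuing the toy Python sketch from the previous section (again my illustration, with made-up values), a sequence can also assign potential extensions to predicate variables, and satisfaction of (4) works as just described.

    # A sequence assigns objects to first-order variables ('x', 'z', ...)
    # and potential extensions (sets) to predicate variables ('Y', 'W', ...).
    # Officially it assigns something to *every* variable; we list only a few.
    seq = {"x": 2, "z": 7, "Y": {1, 2, 3}, "W": {4, 5}}

    def satisfies_Yx(sequence):
        """Does this sequence satisfy the sentential function 'Y(x)'?"""
        return sequence["x"] in sequence["Y"]

    print(satisfies_Yx(seq))   # True, since 2 is a member of {1, 2, 3}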
Second, we’re focusing here on circumstances where we just have name-
like and predicate-like variables. We don’t, for instance, have quantifier-like
variables or connective-like variables. But we didn’t have to restrict ourselves
in this way. We could consider, for instance, variables that could go where
connectives go, so that e.g.

(5) F(a) V G(a)

was also a sentential function, with ‘V’ a connective-like variable. Then our
sequences would also have to assign something to these variables, and we could
ask which sequences satisfy sentential functions like (5). However, we’re not
going to do this: things get pretty technically complex when we start asking
what kind of thing a variable like ‘V’ gets assigned, and we don’t really have to
answer these questions in order to make sense of Tarski.

3 Tarski’s Theory: Sequences Satisfying Sentential Functions
Suppose we have a language L, and its logical expressions are all in E. And
suppose further that P is a sentence of L.
Now, L might have variables of the sort that can go wherever any non-logical
expression can go. But, on the other hand, it might not. So let’s let L+ be an
expanded version of L that does have variables that can go in for L’s non-logical
terms. That is, if e is any simple expression in L, and if e is not one of the logical
terms,[1] then L+ has a variable that can in principle go in for e in any
sentence of L.
Now, here’s Tarski’s theory, stated officially. First, let P′ be the sentential
function you get by uniformly replacing all of P’s non-logical terms with variables
of the right type. For instance, if P is the sentence
(6) F(a) ∨ G(a)
and ‘∨’ is one of the logical expressions, then P′ could be
(7) X(z) ∨ Y(z)
Notice: when the same term (‘a’) appeared twice, we traded it in for the same
variable (‘z’) both times. And different terms (‘F’ and ‘G’) got traded in for
different variables (‘X’ and ‘Y’). This is crucial: in moving from P to P′, we must
put same for same and different for different.
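
Here is a rough Python sketch of that substitution step (my own illustration; representing the sentence as a list of tokens is an assumption made for simplicity): one fresh variable per non-logical term, so repeated terms get the same variable and distinct terms get distinct ones.

    # The sentence P, i.e. 'F(a) ∨ G(a)', as a list of tokens.
    P = ["F", "(", "a", ")", " ∨ ", "G", "(", "a", ")"]

    # Same for same, different for different:
    # each non-logical term is paired with exactly one variable.
    substitution = {"F": "X", "G": "Y", "a": "z"}

    # Logical terms and punctuation are left untouched.
    P_prime = [substitution.get(token, token) for token in P]
    print("".join(P_prime))   # X(z) ∨ Y(z)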
Now Tarski’s official theory says:
P is a logical truth iff P′ is satisfied by all sequences.
Note well: there are tons and tons of sequences — infinitely many, perhaps.
This is because every possible assignment of objects to first-order variables and sets
to higher-order variables counts as a sequence. And Tarski’s official theory really
is talking about all sequences: every single coherent way of slapping variables
and values together counts as a single sequence. If every one of these makes a
sentential function true, then its associated sentence is a logical truth.
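
The following Python sketch (mine, not Tarski’s formalism) makes the ‘all sequences’ test concrete for (7). Since there are far too many sequences to enumerate, it only ranges over a small, made-up stock of objects and candidate extensions; even so, it shows why (7) fails the test while a form like ‘X(z) ∨ ∼X(z)’ passes it.

    from itertools import product

    objects = [1, 2, 3]                                   # made-up objects
    extensions = [set(), {1}, {1, 2}, {2, 3}, {1, 2, 3}]  # candidate extensions

    # A small sample of sequences: assignments to 'z', 'X', and 'Y'.
    sequences = [{"z": z, "X": X, "Y": Y}
                 for z, X, Y in product(objects, extensions, extensions)]

    def satisfies_7(s):                  # (7)  X(z) ∨ Y(z)
        return s["z"] in s["X"] or s["z"] in s["Y"]

    def satisfies_excluded_middle(s):    # X(z) ∨ ∼X(z)
        return s["z"] in s["X"] or s["z"] not in s["X"]

    print(all(satisfies_7(s) for s in sequences))                # False
    print(all(satisfies_excluded_middle(s) for s in sequences))  # True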
How does this compare with our original, intuitive gloss? A sequence represents
one way of assigning meanings to — that is, a way of interpreting — the variables
of the sentential function. But those variables themselves represent the non-logical
terms that they ‘went in for’ in the transition from P to P′. So the pairing of
interpretations-to-variables plus the pairing of variables-to-non-logical-terms, when
stitched together, really just amounts to a pairing of interpretations to non-logical
terms. And since a sentential function has to be satisfied by (that is, made true by)
every sequence, this is the same as saying that the original sentence has to be made
true by every interpretation of its non-logical terms.
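
A tiny Python sketch of that stitching-together (my illustration, reusing the earlier toy values): composing the term-to-variable substitution with a sequence yields a direct assignment of interpretations to the non-logical terms.

    # Which variable went in for which non-logical term.
    substitution = {"F": "X", "G": "Y", "a": "z"}

    # One sequence: interpretations for the variables.
    sequence = {"X": {1, 2}, "Y": {2, 4, 6}, "z": 2}

    # Stitched together: interpretations for the non-logical terms themselves.
    interpretation = {term: sequence[var] for term, var in substitution.items()}
    print(interpretation)   # {'F': {1, 2}, 'G': {2, 4, 6}, 'a': 2}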
[1] Warning: Etchemendy (confusingly!) calls these ‘variable terms’. Note: ‘variable terms’, in
Etchemendy’s lingo, are not variables! They are terms of the language that aren’t ‘held fixed’ for
evaluating logical truth and consequence; they are the terms that aren’t in the set E of ‘logical
expressions’.

4 Etchemendy D-Sequences, and All That Rot
At the end of Chapter 4, Etchemendy goes (more or less) through a ‘derivation’
of Tarski’s model-theoretic account from the above, sequence-satisfaction-based
account.
I’m not going to bother re-creating this here, because I think Etchemendy
introduces more trouble than he solves. But I will try to explain just what’s
going on. As we noted before, there are essentially two things that get pasted
together: the sequences which assign interpretations to variables, and the move
from P to P0 .
One thing to notice is that — so long as we put same for same and different
for different — it doesn’t matter what variables we put in for what terms. The
sentence
(6) F(a) ∨ G(a)
could just as well be traded in for either of the following two sentential functions:
(7) X(z) ∨ Y(z)
(8) Z(x) ∨ W(x)
And now consider two (mathematical) sequences like this:

    Sequence 1           Sequence 2
    X ↦ {1, 2}           Z ↦ {1, 2}
    Y ↦ {2, 4, 6}        W ↦ {2, 4, 6}
    z ↦ 2                x ↦ 2
    ⋮                    ⋮
Sequence 1 treats (7) in exactly the same way as Sequence 2 treats (8): changes
in the variables used in the move from (7) to (8) are canceled out, as it were, by
corresponding changes in Sequences 1 and 2.
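
To see the cancellation concretely, here is a small Python check (my illustration): Sequence 1 applied to (7) and Sequence 2 applied to (8) deliver the same verdict, because the change of variables is matched by a change inside the sequences.

    seq1 = {"X": {1, 2}, "Y": {2, 4, 6}, "z": 2}
    seq2 = {"Z": {1, 2}, "W": {2, 4, 6}, "x": 2}

    def satisfies_7(s):   # (7)  X(z) ∨ Y(z)
        return s["z"] in s["X"] or s["z"] in s["Y"]

    def satisfies_8(s):   # (8)  Z(x) ∨ W(x)
        return s["x"] in s["Z"] or s["x"] in s["W"]

    print(satisfies_7(seq1) == satisfies_8(seq2))   # True: same verdict either way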
As a result, detouring through variables and sentential functions gives us an
extra layer of complexity that we don’t really need. Rather than going from (6)
to (7) or (8), and then to a sequence to ‘interpret’ the variables, we could go
directly from (6) to these interpretations:

    F ↦ {1, 2}
    G ↦ {2, 4, 6}
    a ↦ 2
This kind of assignment — directly from non-logical terms of the language to
new interpretations — is what Etchemendy calls a D-sequence. And we can
(super roughly!) define D-satisfaction as something like: if the terms had been
variables and the D-sequence had been a real sequence, it would have satisfied
(6).
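
One last bit of the running Python illustration (mine, and rough in just the way the text flags): a D-sequence assigns interpretations directly to the non-logical terms, and D-satisfaction of (6) can be checked without ever mentioning variables. Notice that nothing here specifies a domain of quantification, which is the point made just below.

    # A D-sequence: the non-logical terms of (6) are interpreted directly.
    d_sequence = {"F": {1, 2}, "G": {2, 4, 6}, "a": 2}

    def d_satisfies_6(d):   # (6)  F(a) ∨ G(a)
        return d["a"] in d["F"] or d["a"] in d["G"]

    print(d_satisfies_6(d_sequence))   # True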
But D-sequences are Tarski’s models, and D-satisfaction is Tarskian truth on a
model. (Notice: these models have no domain. That is important, and features
in some of Etchemendy’s arguments.)
