The Syntactic Web

In reality the “semantic web” is, and can only ever be, a “syntactic web”. Syntax is merely form — the shape of arbitrary objects called symbols, within a formal notational system adopted by an agreed and shared convention. Computation is the rule-based manipulation of those symbols, with the rules and manipulations (“algorithms”) based purely and mechanically on the shapes of the symbols, not their meaning — even though most of the individual symbols, as well as the combinations of symbols, are systematically interpretable (by human minds) as having meaning.
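The shape-not-meaning point can be made concrete with a toy sketch (the rule table and symbols below are my own illustration, not anything from the text): a rule that rewrites tokens purely by their form, with no access to any interpretation a reader might attach to them.

```python
# A toy "computation": symbols are rewritten by shape alone.
# The rule table maps token-shapes to token-shapes; nothing here
# consults what "0" or "1" might mean to an interpreting mind.
def rewrite(symbols):
    rule = {"0": "1", "1": "0"}  # keyed purely on shape
    return [rule.get(s, s) for s in symbols]

print(rewrite(list("0110")))  # ['1', '0', '0', '1']
```

The manipulation is systematically interpretable (say, as logical negation), but that interpretation lives entirely in the observer, not in the rule.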

Semantics, in contrast, concerns the meanings of the symbols, not their shape, or the syntactic manipulation of their shapes. The “symbol grounding problem” is the problem of how symbols get their meanings, i.e., their semantics, and the problem is not yet solved. It is clear that symbols in the brain are grounded, but we do not yet know how. It is likely that grounding is related to our sensorimotor capacity (how we are able to perceive, recognise and manipulate objects and states), but so far that looks as if it will only connect symbols to their referents, not yet to their meaning. Frege’s notion of “sense”, which is again just syntactic, because it consists of syntactic rules, still does not capture meaning. Nor does formal model-theoretic semantics, which likewise merely finds another syntactic object or system that follows the same rules as those of the syntactic object or system for which we are seeking the meaning.

So whereas sensorimotor grounding — as in a robot that can pass the Turing Test — does break out of the syntactic circle, it does not really get us to meaning (though it may be as far as cognitive science will ever be able to get us, because meaning may be related to the perhaps insoluble problem of consciousness).

Where does that leave the “semantic web”? As merely an ungrounded syntactic network. Like many useful symbol systems and artificial “neural networks”, the network of labels, links and connectivity of the web can compute useful answers for us, has interesting, systematic correlates (e.g., as in latent “semantic” analysis), and can be given a systematic semantic interpretation (by our minds). But it remains merely a syntactic web, not a semantic one.
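The sort of systematic correlate alluded to here can be sketched in a few lines (the mini-corpus below is invented, and plain term-frequency cosine similarity stands in for full latent semantic analysis): “semantic”-looking similarity falls out of purely syntactic co-occurrence counts, with no grounding anywhere.

```python
# A hedged sketch: "semantic" similarity recovered from purely syntactic
# word counts. No meanings are consulted -- only the shapes and
# frequencies of co-occurring tokens.
from math import sqrt

docs = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "stocks rose and bonds fell",
]

vocab = sorted({w for d in docs for w in d.split()})

def vector(doc):
    # term-frequency vector: pure counting, no interpretation
    words = doc.split()
    return [words.count(v) for v in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

v = [vector(d) for d in docs]
# The two animal sentences come out "similar" without any grounding:
print(cosine(v[0], v[1]) > cosine(v[0], v[2]))  # True
```

That the first two sentences cluster together is a fact about symbol co-occurrence statistics; whether that amounts to meaning is exactly the question at issue.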

The Cognitive Killer-App 2006-04-16

Kevin Kelly thinks the web is not only what it really is — which is a huge peripheral memory and reference source, along with usage stats — but also a kind of independent thinking brain.

It’s not, even though it has connections, as does a neural net (which is likewise not a thinking brain).

KK is right that googling is replacing the consultation of our own onboard memories, but that is par for the course, ever since our species first began using external memories to increase our total information storage and processing capacity: Mimesis, language and writing were earlier, and more dramatic precursors. (We’re talking heads who already feel as helpless without our interlocutors, tapes and texts today as KK says we all will — and some already do today — without the web.)

And KK misses the fact that the brain is not, in fact, just a syntactic machine, the way the web is: There is no “semantic” web, just an increasingly rich “syntactic web”.

Nor (in my opinion) is the web’s most revolutionary potential in its role of peripheral mega-memory and hyper-encyclopedia/almanac. It is not even — though it comes closer — in its interactive Hyde-Park role in blogs and wikipedias. That’s just an extension of Call-In Chat Shows, Reality TV, acting-out, and everyone-wants-to-be-a-star. We’ve all had the capacity to talk for hundreds of thousands of years, but most of us have not found very much worth saying — or hearing by most others. The nature of the gaussian distribution is such that that is bound to remain a demographic rarity, even if the collective baseline rises — which I am not at all sure it’s doing! We just re-scale…

No, I think the real cognitive-killer-app of the web is the quote/commentary capability, but done openly — “skywriting”. At the vast bottom level this will just be the Hyde-Park “you know what’s wrong with the world dontcha?” pub-wisdom of the masses, gaussian noise. But in some more selective, rigorous and answerable reaches of cyberspace — corresponding roughly to what refereed, published science and scholarship used to be in the Gutenberg era — remarkable PostGutenberg efflorescences are waiting to happen: waiting only for the right demography to converge there, along with its writings, all Open Access, so the skywriting can begin in earnest.

Stevan Harnad

Paying the Piper 2006-03-26

Richard Poynder (RP): “Digital Rights Management may not prove workable in the long-term and can always be circumvented, so most creators are probably moving (like it or not) into a give-away world”

I don’t know whether all that’s true, practically and statistically, but if so, I’m not sure the outcome will be grounds for cheeriness. It turns creativity into a pop, distributed enterprise (which has some plusses, in some cases) but removes rewards from a kind of individual creativity that has brought us much great work in the past. Selfless creators there have been too, in the past, not motivated by desire or need for personal gain. But does that cover all, most, or enough of it?

[from another interlocutor] ‘Obviously, some artists of all kinds will always produce because they must; but if they have to do it as amateurs because they must earn their bread as janitors or professors of geology, then they will do much less work and it will be far less developed (as well as using only cheap materials). As all of them will be isolated from a (nonexistent) mainstream of common understanding and encouragement, styles will not develop, nor will there be any building upon others’ achievements. Each artist will remain a wild tree with small hard fruit, rather than a cultivated and well-fed tree giving a lot of fine sweet fruit.’

RP: “Apart from the isolated eccentric, all creators want their creations to be as widely distributed and read/listened to/seen as possible.”

Yes, but not necessarily at the cost of forfeiting any prospect of being able to make — or even to aspire to make — a fortune (or, in some cases, even to make a living). I have no idea about the true proportions, hence the statistical uncertainty, but I do raise a point of doubt, about a potential loss of a form of individualism that, on the one hand, resembles the materialist capitalism neither of us admires or embraces, but that, in the Gutenberg age (like religion, which I admire just as little as capitalism!) managed to inspire a lot of immortal, invaluable work. Where mass-market subsidiaries/services (not exactly the hub of most human creativity) are the only real revenue sources, collective efforts and wide exposure are not necessarily incentive enough to keep attracting enough of the selfish-gene-pool’s higher-end gaussian tail to making the kinds of contributions that have been the (inevitable, gaussian) glory of the history of human culture until now. Yes, new (collective, collaborative, distributed) forms of creativity are enabled, but I am lamenting the disabling of some of the older ones. It is not at all clear that they were spent.

And although collective, distributed adaptivity is perfectly congenial to our (blind-watch-) Maker, that is not in fact the basis on which He fashioned our current (selfish-) Genome. Apart from inclusive-fitness fealty to kin — which occasionally sublimates into selfless generic humanitarianism and altruism — most of our motivation is, frankly, individualistic, which is to say selfish, hankering for territory, dominance, and the tangible material rewards that can still engage the tastes instilled by the ancestral environment that shaped us. That’s what makes apples taste sweet, and sweetness desirable.

Scientists and scholars have always been a minority, and exceptional, in that they sought a cumulative, collective good: learned inquiry. Learned research was (at least since dark monkish days) a public, distributed, collective — and thereby self-corrective — enterprise. So, as I say, not much change there, in the PostGutenberg Era. But not so for music (which I hardly lament, since music as art has already died an unnatural premature death anyway, with the arrival, and departure, of atonalism, no thanks or no-thanks to digital doomsday), nor for literature (which was already suffering from block-buster economics before the digital day, but may now be dealt the death blow, leaving only the pop, How-To and movie-wannabe genres viable); film, I would say, has already done itself in.

So, as I say, it depends on the statistical proportions, on which I have no objective data. So too for hacking — the new kid on the block — for which it is still not clear whether individualism or collectivism is the most fruitful procreative mode.

RP: “The digital world allows diffusion, access and collaboration in a way never before possible. Not only does DRM threaten to take away all those benefits but, given the current status of the technology, it generally imposes even greater restrictions than were experienced in the pre-digital world. That’s bad news for creators.”

Bad news for: certain kinds of creators. The rest is the statistics of the kinds. Let me put my cards on the table: I’m not defending the divine right of the McSpielbergs of Gaia’s Gollywood to make limitless bundles on their showy, empty pap. They are not my models. Shakespeare and Mozart are.

RP: “Cory Doctorow did some calculations showing that the potential earnings of the average scribbler (we’re not talking J K Rowling here) have been on a downward curve for a long time, indeed long before the digital world. This is a product of something else, but can surely only be exaggerated by the introduction of digital technologies.”

I don’t know the works of CD or JKR, but I’d bet they’re not quite of the rarefied calibre of the WSs and WMs I had in mind! (What’s the point of regressing the rare masterworks on the menial mean?) Nor do I think WS or WM would have ever done an analysis like that, in reckoning whether or not to go OA with their work… (Actually, I think both WS and WM earned most of their rewards from the analog world of performance, not the digital-code-world of composition, but I’m certain that wasn’t true of Beethoven.)

RP: “Most (if not all) are (by the standards of main street) a little reckless about their careers. Many also seem to have a disdain for money. Given that we currently live in a world far too dominated by market forces and bean counting, I find this very encouraging. As the bard said ‘Getting and spending, we lay waste our powers’.”

A (big) part of me resonates with this too. But don’t forget that hackers are demographically anomalous, being an ant-hill of Aspergians and worse; it’s all you can do to get them to change their underwear, let alone balance their bank-accounts. And there have been selfless geniuses in other fields too. I just worry about the potential loss of the future ones that are not blind to the presence/absence of the very possibility of material glory, alongside spiritual!

Hal Varian, by the way, made similar statistical calculations about the likelihood of big-bucks for most authors. And I countered with the same dream-of-apples argument I raise with you: How many genotypes are not driven by that? are they enough? and would the loss of the potential “market” for the rest be insignificant, in the scheme of things? (Without statistics on this impalpable stuff, I don’t think anyone can say.) Reducing it all to McDisney re-use rights for high-schoolers, as LL does, just goes to show how the world of “creativity” looks to a well-meaning philistine: a reductio ad absurdum.

RP: “You are right to say that we don’t really have the necessary statistics to reach any firm conclusions, and so it is speculation. You are also right to say that there is no cause for cheeriness in this. But then each decade that passes we seem increasingly like ants in an anthill (Raymond talks about this in terms of the scaling up of society), and individualism features less and less. Maybe that’s just the way it is going to be.”

Ok for autists, but not for all artists…

RP: “It occurs to me, however, that if your model is the likes of Mozart, then will not future Mozarts continue to do what they always did, regardless of material reward? From what I know of Mozart (not enough by far) he didn’t seem too driven to do what he did in order to pay the bills; and when he did earn money was it not from playing the piano, rather than composing?”

That was still early days, and off-line coded composition had not yet become an autonomous livelihood, as distinct from on-line analog performance; but by the day of Dickens, Dostoevsky, Beethoven and Brahms it had (and Beethoven in particular was quite a copyright maven!)

RP: “If so, is that not again the same model (to all intents and purposes) of giving away your creation and making money from associated services. As I say, I may be wrong about Mozart, but is it not the case that people who are really gifted and driven to create just get on and do it, and rarely think about how they will pay the bills?”

The ones that survive to be heard from. (And do not under-estimate the prospect of potential riches as an extrinsic motivator. The failure of DRM would wipe that out as well, and thereby perhaps render a wealth of human promise still-born. I’m not saying there will not still be some intrinsically motivated stout-hearts. I’m just worrying about how many, and which, and, most important, which ones will be lost.)

RP: “Of course if the universe is, as you say, a “mindless, feelingless machine” (in which we are all trapped), then none of this really matters and individualism and creativity are all for naught right?”

That’s not quite what I said! The universe is mostly feelingless, but organisms are not. Functionally speaking, they may as well be, since their feelings can have no independent causal power, on pain of telekinetic dualism, but feelings they are nonetheless. So whereas aesthetics, like all other feeling, does not “matter” functionally, in that it has no causal role, it not only matters but is what we mean by anything’s “mattering” at all, affectively. “Mattering” is an affective matter!

RP: “(I was listening to the World Service the other night, which was talking about Dawkins’ The Selfish Gene — currently celebrating its 30th birthday I believe — and he was saying how people used to write to him and say that they hadn’t slept for three weeks after reading the book, and could see no point in continuing to live.)”

Well, the handwriting was already on the wall with cosmology; the biosphere is just a bit of it. If they want to wile away the sleepless hours, they should puzzle over the mystery of how/why matter feels after all, albeit ever so superfluously!

So What Else Is True? (2006-03-17)

The good thing about a blog is that you can answer questions even when you haven’t been asked. A friend just sent me What We Believe But Cannot Prove: Today’s Leading Thinkers on Science in the Age of Certainty (edited by John Brockman; Harper, 2006). But before I even open it – well, I did peek and saw it’s mostly cog-sci light-weights rather than hard-sci heavy-hitters – I wanted to put it on record that Descartes already did a good job on this in the Age of Enlightenment.

Descartes asked the hard questions about certainty (“what can I know for sure?” “what is open to doubt?”) and his conclusion seems to be just as certain today: There is only one other thing I can know for sure, apart from what can be proved (as in logic and mathematics), and that is the fact that I feel (if/when I feel). Descartes overstated it, suggesting that when I’m thinking, I can’t doubt that I’m existing too (“Cogito Ergo Sum”), but that has always been much too theory-ridden and equivocal. What’s meant by “I,” or even by “existence”? Big words. But in baby-talk, it’s just as self-contradictory to say it’s true that “I am not feeling” when I am in fact feeling (“sentitur ergo sentitur”) as it is to say that both P and not-P are true. (No need to “define” feeling by the way; we all know what it feels like to feel, and anyone who says otherwise is just bluffing. [Pinch him!].)

But that’s all. Nothing else is certain but those two kinds of truths (the formal truths of mathematics, provably true on pain of contradiction, and the self-demonstrating truth of experiencing itself – which does not, by the way, mean that experience conveys any other certainties). All else is mere probability. In particular, all the truths of science. (It’s certain that things feel like whatever they feel like, that they seem whatever they seem; anyone who doubts that is on a fool’s errand. But whether they really are the way they seem is an entirely different matter.)

But in what sense do we live in the age of certainty? Because of the naïve scientism of some of us (“scientists have proved that…”)? or the even more naïve fideism of others (“credo quia absurdum”)?

Now I shall peek in the book and see what these bright lights have to say…

Stevan Harnad

Skywriting (c. 1987)

Sky-Writing

(Submitted to and rejected by New York Times Op Ed Page, 1987; finally appeared in Atlantic Monthly May 2011)

Stevan Harnad
Behavioral & Brain Sciences
Princeton NJ

I want to report a thoroughly (perhaps surreally) modern experience I had recently. First a little context. I’ve always been a zealous scholarly letter-writer (to the point of once being cited in print as “personal communication, pp. 14 – 20”). These days few share my epistolary penchant, which is dismissed as a doomed anachronism. Scholars don’t have the time. Inquiry is racing forward much too rapidly for such genteel dawdling — forward toward, among other things, due credit in print for one’s every minute effort. So I too had resigned myself to the slower turnaround but surer rewards of conventional scholarly publication. Until I came upon electronic mail: almost as rapid and direct and spontaneous as a telephone call, but with the added discipline and permanence of the written medium. I quickly became addicted, “logging on” to check my e-mail at all hours of the day and night and accumulating files of intellectual exchanges with similarly inclined e-epistoleans, files that rapidly approached book-length.

And then I discovered sky-writing — a new medium that has since made my e-mailing seem as remote and obsolete as illuminated manuscripts. The principle is the same as e-mail, except that your contribution is “posted” to a global electronic network, consisting currently of most of the universities and research institutions in America and Europe and growing portions of the rest of the scholarly and scientific world. I’m not entirely clear on how “the Net,” as it is called, is implemented and funded, but if you have an account at any of its “nodes,” you can do skywriting too.

The transformation was complete. The radically new medium seemed to me a worthy successor in that series of revolutions in the advancement of ideas that began with the advent of speech, then writing, then print; and now, skywriting. All my creative and communicative faculties were focused on the lively international, interdisciplinary scholarly interactions I was having on the issues of intellectual interest to me at the time (which happened to arise from Searle’s “Chinese Room Argument” and eventually came to be called the “symbol grounding problem”). Who needs conventional publication when, within a few hours, the “article” you post on the Net is already available to thousands and thousands of scholars (including, potentially, all of your intended conventional audience), who may already be posting back e-responses of their own? I was in the dizzying Platonic thrall of sky-writing and only too happy to leave the snail-like scope and pace of the old epistolary technology far below me.

But then something quite unexpected happened. With hindsight I can now see that there had already been some hints that not all was as it should be. First, veteran e-mailers and skywriters had warned me that I ought to restrict my contributions to the “moderated” groups. (Most of the subjects discussed on the Net — including physics, mathematics, philosophy, language, artificial intelligence, and so on — have, respectively, both a moderated and an unmoderated group.) I ignored these warnings because postings to the moderated groups are first filtered through a moderator, who reads all the candidate articles and then posts only those he judges to be of value. I reasoned that I could make that judgment for myself — one keystroke will jettison any piece of skywriting that does not interest you — and that “moderation” certainly isn’t worth the huge backward step toward the old technology that the delays and bottle-necking would entail. And indeed the moderated groups carry much less material and their exchanges are a good deal more sluggish than the unmoderated ones, which seem to be as “live” and spontaneous as direct e-mail (but with the added virtue of appearing in the sky for all to see and contribute to).

Apart from the warnings of the veterans, other harbingers of cloudier horizons had been the low quality of many of the responses to my postings, and the undeniable fact that some of them were distinctly unscholarly, in fact, downright rude. No matter. I’m thick-skinned, I reasoned, and perfectly able and willing to exercise my own selectivity solo, in exchange for the vast potential of unmoderated skywriting.

Then it happened. In response to a rather minor posting of mine, joining what was apparently a long-standing exchange (on whether or not linguistic gender plays a causal role in social discrimination), there suddenly appeared such an astonishing string of coprolalic abuse (the lion’s share not directed at me, but at some other poor unfortunate who had contributed to earlier phases of the exchange) that I was convinced some disturbed or malicious individual had gained illicit access to someone else’s computer account. I posted a stately response about how steps must be taken to prevent such abuses of the Net and, much to my surprise, the reaction was a torrent of echo-coprolalia from all directions, posted (it’s hard to judge in this medium whether it was with a straight face) under the guise of defending free speech. For several weeks the Net looked like a global graffiti board, with my name in the center.

The veteran fliers told me they’d told me so; that the Net was in reality a haven for student pranksters and borderline personalities, motherboard-bred, for whom the completely unconstrained nature of the unmoderated groups represents an irresistible medium for acting out. Moreover, certain technical problems — chief among which was the unsolved “authentication” problem, namely, that there is no way to determine for sure who posted what, where — had made the Net not only virtually unregulable, but also, apparently, immune to defamation and libel laws.

My penchant for skywriting has taken quite a dive since this incident. I don’t relish what’s been happening with my name, for example, but I suppose the only way to have prevented it would have been to have stayed away from the Net altogether, hoping it might never occur to anyone to bring me up spontaneously. There’s an element of Gaussian Roulette in exposure to any of the media these days, no doubt. But before I wrote it all off as one of the ineluctable technological hazards of the age of Marshall McLunacy, I thought I’d post it with the old, land-based technology, to see whether anyone has any ideas about how to prevent the vast intellectual potential of skywriting from being done in by noise from the tail end of the normal distribution. If the Wright brothers’ invention were at stake, or Gutenberg’s, what would we do?

Stevan Harnad (c. 1987)

Extermination vs. Expropriation

No one has written an ethics/etiquette book on:

(1) How 15 million people, dispersed as a stateless and oppressed minority all over the planet for 2000 years, are supposed to react to having a third of their number systematically exterminated on the grounds of their race by various European states within one half-decade

(2) How 1.5 million other people, having nothing at all to do with that extermination, are supposed to react when the land they have been living in for 2000 years is expropriated and given as a state to the remainder of the exterminated people by the same European states that allowed (or helped) them to be exterminated

(3) How those of the exterminated people who emigrate to the expropriated state are supposed to react to the expropriated people, who form a fifth column within and around their expropriated state

(4) How either side is supposed to react after almost 60 years of ensuing bloody tit-for-tat vendettas

My guess is that the ethics/etiquette book for such a case has not been written because the case is unique, tragic, and no one knows what right or wrong is, or what to do about it. Onlookers simply fixate selectively on the injustices and atrocities (on either side) that affect or disturb them most. And, as usual, they offer criticism and solutions without having the responsibility of testing whether they will really work, or of suffering the consequences if they do not.

2006-02-23 Wiesel Words on Creed, Credulity and Culture

On Leon Wieseltier on Dan Dennett on Voodoo:

Whoops, creed crunched! But anyone reading this review has enough face-valid evidence, plus excerpted text, to see that LW’s words contradict themselves and in fact offer no alternative at all, other than grumbling and no small dose of hysteria and spleen! I have not read Dan’s book, but the obvious rebuttals to all of LW’s points (except two to which I will return) pop up immediately, as soon as one reads LW’s wiesel-words! There’s space between the two “bearded” extremes? Spare me! That’s like space between True and False (does LW think there’s some probabilistic wiggle-room in there? the existence of the immaterial/immortal soul [T/F?]; the existence of god(s) [T/F?]; any of the other supernatural smorgasbords served up by human wit across the ages [T/F?]).

But, to consider only LW’s two substantive points: (1) DD’s selective quote from Darwin left out Darwin’s personal creed? I can’t mind-read, let alone mind-read a deceased brain, but my own guess is that either that was Darwin exercising some Galilean diplomacy, or CD himself did not quite grasp where his bright lights were shining! (Either way, who cares? This is about truth, not about authority, or personal credos: otherwise we’d be committing the intentional fallacy — that propositions don’t mean what they mean, but only what their drafters meant them to mean…) Ditto for Hume.

LW’s other point, that voodoo is not to be taken literally but metaphorically: What the Dickens is that supposed to mean? Apart from the irrelevant cultural point that religion can inspire or even be art (Psalms, Bach, Blake, Botticelli) (so what? art is not about truth value but aesthetic value), what literal point is being made in pointing out this truism? Far more often voodoo is or inspires atrocities and abominations — today’s latest happening to be tit-for-tat shrine bombings!

As to the tautological status of saying that human mentation is biologically based even in its degrees of freedom: what alternative does LW have in mind? Original sin? Divine inspiration? Free Will? To say that all human doings originate from and are constrained by biology is to say no more nor less than that all human doings and sayings are constrained by cause/effect and the law of the excluded middle! Unless, of course, some of the Holy Writ to the contrary is (literally) true after all. In which case I suppose not just Biology, but Science and Logic are all up for grabs…

On LW on Philosophy — the less said, the better!

Publish or Perish

As Science is mere structured common sense,
her means but trial-and-error made intense,
the only virtue setting her apart,
and raising her above (some think) mere Art,
    Is her convergence ever on consensus:
    collective, self-corrective her defenses.
A flagellant, she boldly does defy
Reality her schemes to falsify.

And yet this noble jousting were in vain,
and all this pain would yield no grain of gain
    if Science were content, a shrinking violet,
    her works from all the world e’er to keep private.
    Instead, performance public and artistic,
    restraining all propensities autistic,
perhaps less out of error-making dread,
than banal need to earn her daily bread.

For showbiz being what it is today,
work’s not enough, you’ve got to make it pay.
    What ratings, sweeps and polls count for our actors,
    no less than our elected benefactors,
    for Science the commensurate equation
    is not just publication but citation.
The more your work is accessed, read and used,
the higher then is reckoned its just dues.
    Sounds crass, but there may be some consolation,
    where there’s still some residual motivation
to make a difference, not just make a fee:
the World Wide Web at last can make Science free.

Stevan Harnad

U.S. Defaults to Denmark

September 11 2001

Gershwin’s gay garish Gotham
today has joined the ranks
of Gaia’s tragicopoles,
London, Dresden, Gdansk

for evermore.

An unhallowed razor,
thrust,
so savagely,

into the apple’s core.

But please,

spare us the braying
of the semioticians of symmetry.

Let them stay huddled,
paretically,
in Zeno’s corner,
ruminating,
endlessly,
on the etymology and etiology
the means/ends mission statements
of “horror,” and “counterhorror,”
lateral, collateral, and full frontal,
the feudal bloodline
of our selfish genes,
even unto the Big Bang,

while we chew instead
on whether high-tech sociopathy
and low-tech superstition
were indeed always slated
to win the day,
eventually,
in life’s no-sin, no-sum
game
of Gaussian roulette.

Coda: Homage to William of Ockham
(Or, The Hazards of Passive Exposure To Involuntary Co-Martyrdom)
(Or, Trumping Pascal’s Wager)

our forebears had it right
the fewer gods the better
monody just undershot
the optimum by
one