Path: utzoo!yunexus!geac!syntron!jtsv16!uunet!lll-winken!lll-tis!ames!mailrus!purdue!spaf
From: s...@cs.purdue.EDU (Gene Spafford)
Newsgroups: news.misc,news.admin
Subject: Some interesting news stats
Message-ID: <>
Date: 24 Oct 88 15:01:39 GMT
Article-I.D.: medusa.5200
Sender: n...@cs.purdue.EDU
Reply-To: (Gene Spafford)
Organization: Department of Computer Science, Purdue University
Lines: 91

I recently made a presentation on the Usenet and NNTP to the IETF
(Internet Engineering Task Force), and I pulled together a bunch of
stats about the Usenet.  To my knowledge, no one has ever done this
before, so I thought I'd also publish some of them here for your
amusement.

The following numbers are derived from old postings and data from the
following people:  Henry Spencer, Steve Bellovin, Mark Horton, Rick
Adams, Brian Reid, and me.

Some observations:
1) Growth in sites.  The Usenet has been growing by an approximate
doubling (or better) each year.  (see below)

2) Growth in volume. The number of articles posted to the net has
approximately doubled each year.  The sum total of article sizes has
not been growing as quickly as the number of articles. That is, the
average article SIZE has decreased over time, but the article COUNT has
increased.  (see below)

3) Well over 1 million articles have been posted to the Usenet since
its origination in 1979.

4) If you make some very conservative assumptions about the cost of
operation of Usenet, you get some astonishing numbers.  Assume that
each of the estimated 11000 current Usenet sites spends approximately
$10 per day on Usenet -- communications charges, CPU time and disk
time.  Further assume that, on the average, each of the 303,000 (est.)
Usenet readers spends 20 minutes per day reading/posting news, at an
average hourly wage of $15 per hour per person (if they were working).
Then, the total cost of Usenet at its current size is $593,125,000 per
year!  Even if those numbers are off by a factor of 10 (doubtful),
the result is still staggering!
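The arithmetic behind that figure works out exactly from the stated
assumptions.  A short Python sketch (every number below is one quoted in
the paragraph above, nothing more):

```python
# Back-of-envelope Usenet cost estimate, using only the figures stated above.
sites = 11_000
site_cost_per_day = 10            # dollars: communications, CPU, disk
readers = 303_000
minutes_per_day = 20              # reading/posting time per reader
hourly_wage = 15                  # dollars/hour (if they were working)

site_cost = sites * site_cost_per_day * 365
# integer arithmetic so the annual total comes out exact
reader_cost = readers * minutes_per_day * hourly_wage * 365 // 60
total = site_cost + reader_cost
print(f"${total:,} per year")     # -> $593,125,000 per year
```

The reader time dominates: it is about $553M of the total, versus $40M
for the site operating costs.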

5) Latest figures show that 97% of all articles reach the
well-connected sites within 72 hours.  Effectively, this means that
almost every site has a delay of at most 6 days before seeing a
posting, and most see articles within 3 days.  Approximately 82% of all
posted articles are available to the well-connected sites within 24
hours of posting (largely thanks to NNTP).

Some possible conclusions that can be drawn from this:

I) Volume has been increasing due to the addition of new sites and new
posters.  The trend with people who have been on the net for a while
seems to be to post less and post shorter articles.  The increase in
the number of newsgroups does not correlate well with the increase in
volume (although it may correlate well with postings going into the
wrong groups due to namespace pollution).

II) At the current rate of growth, Usenet will pass its two-millionth
message sometime in 1990.  By the end of that year, message traffic
would be approximately 8000 messages per day, exchanged by 50,000
Usenet sites.  I cannot conceive of that happening (although 2 years
ago I could not conceive of over 10K sites & 4Mb per day traffic,
either!); I suspect something will happen to break the network
up before then -- either due to internal pressure, or external
forces concerned about costs and traffic.  We already see this
happening with alternate distributions and the surge in mailing lists.

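The projection in point II is straight compounding from the 10/1988
figures -- a trivial sketch (the post's rounder numbers, 8000 messages/day
and 50,000 sites, allow for a bit more than two clean doublings by the
end of 1990):

```python
# Doubling projection from the 10/1988 figures quoted in the post.
articles_per_day = 1800
sites = 11_000
for year in (1989, 1990):         # one doubling per year
    articles_per_day *= 2
    sites *= 2
print(articles_per_day, sites)    # -> 7200 44000
```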
III) I don't want to even try to speculate what will happen once
hypermedia Usenet becomes available....

Some year-by-year figures:

1979: 3 sites, 2 articles per day
1980: 15 sites, 10 articles per day
1981: Usenet described in Usenix conference -- sites invited to join.
      Notesfile system comes on-line and joins Usenet.
      This explains the jump in sites, although postings remain low
      because groups are mostly technical & Unix-oriented and
      few "novice" users use the groups.
1981: about 150 sites, 20 articles per day
1982: about 400 sites, 35 articles per day
1982: Did 4.1 or 4.2 BSD come out around here?  That would explain
      the sudden jump in postings, I believe.
1983: over 600 sites, 120 articles per day
1984: over 900 sites, 225 articles per day
1985: over 1300 sites, 375 articles per day, 1Mb+ per day
1986: about 2500 sites, 500 articles per day, 2Mb+ per day
1986: NNTP introduced. LZ compression in news 2.10.
1987: about 5500 sites, 1000 articles per day, 2.4Mb+ per day
10/1/1988: almost 11,000 sites, 1800 articles per day, 4Mb per day
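The year-over-year growth implied by the site counts in the table can be
computed directly -- a sketch to make the "doubling or better" observation
inspectable (figures are the ones quoted above; 1988 is the 10/1 snapshot):

```python
# Implied annual growth factors for site counts, from the table above.
sites = {1979: 3, 1980: 15, 1981: 150, 1982: 400, 1983: 600,
         1984: 900, 1985: 1300, 1986: 2500, 1987: 5500, 1988: 11000}
years = sorted(sites)
growth = {y: round(sites[y] / sites[y - 1], 2) for y in years[1:]}
print(growth)
```

The early years show factors of 5x-10x; the mid-1980s dip below 2x
before the count doubles again in 1987 and 1988.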
Gene Spafford
NSF/Purdue/U of Florida  Software Engineering Research Center,
Dept. of Computer Sciences, Purdue University, W. Lafayette IN 47907-2004
Internet:	uucp:	...!{decwrl,gatech,ucbvax}!purdue!spaf

Path: utzoo!utgpu!water!watmath!looking!brad
From: b...@looking.UUCP (Brad Templeton)
Newsgroups: news.misc,news.admin
Subject: Re: Some interesting news stats
Message-ID: <2206@looking.UUCP>
Date: 25 Oct 88 02:20:49 GMT
References: <>
Reply-To: b...@looking.UUCP (Brad Templeton)
Organization: Looking Glass Software Ltd.
Lines: 34

The reason that the net has been able to grow at the rate it has can
be found by examining similar jumps in technology.

When the net started, articles were sent unbatched, uncompressed and
over 1200 bps modems.  There were some sites on the net using 300 bps
modems.

Over time the following factors have come into play:

A) 14,000 bps modems - 11.6 factor improvement
B) Data compression - 2.2 factor improvement
C) Batching - 1.2 factor improvement
D) Long Distance Rate reductions - 1.3 factor improvement?

Total improvement: over 40 times!

Thus from late 1982 (100K/day) to today (4 megabytes/day) USENET has
actually not grown in terms of the cost to handle a link!  That's
pretty astounding.
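Multiplying Brad's four factors out, next to the volume growth he cites
(late 1982: ~100 KB/day; today: ~4 MB/day), shows the two numbers really
do line up -- a quick sketch:

```python
# Brad's per-link throughput factors, multiplied together, versus the
# growth in daily traffic volume over the same period.
factors = {
    "14,000 bps modems": 11.6,
    "data compression": 2.2,
    "batching": 1.2,
    "long distance rate cuts": 1.3,
}
improvement = 1.0
for f in factors.values():
    improvement *= f
print(round(improvement, 1))      # -> 39.8, i.e. "over 40 times" rounded
print(4_000_000 / 100_000)        # volume growth -> 40.0
```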

If you add in other things, like PC Pursuit, X.25 links, Internet links,
Bitnet links, StarGate and, in the future, ISDN, we see an actual
reduction in the cost per link.

Of course while the cost/link has stayed the same, the number of links
has increased, thus increasing the total cost of the net.  However, the
number of non-free links has not grown anywhere near the way the number
of sites has.  Most new sites are on free links -- things like Suns,
personal workstations and at-home Unix boxes.

The one great cost that has gone up is the human time spent reading
waste of time postings.   (Like this one?)
Brad Templeton, Looking Glass Software Ltd.  --  Waterloo, Ontario 519/884-7473

Path: utzoo!utgpu!water!watmath!clyde!att!osu-cis!!mailrus!uflorida!gatech!ulysses!smb
From: (Steven Bellovin[jsw])
Newsgroups: news.misc,news.admin
Subject: Re: Some interesting news stats
Message-ID: <>
Date: 25 Oct 88 18:46:31 GMT
References: <> <2206@looking.UUCP>
Organization: AT&T Bell Laboratories, Murray Hill
Lines: 15

In article <2...@looking.UUCP>, b...@looking.UUCP (Brad Templeton) writes:
> When the net started, articles were sent unbatched, uncompressed and
> over 1200 bps modems.  There were some sites on the net using 300 bps
> modems.

In fact, when the net started 300 baud was the norm; very few sites had
1200 bps, since Vadic wasn't licensing their stuff very much, AT&T was
just starting to license theirs, and if you wanted a ``genuine Bell''
unit you had to rent it, at ~$40/mo.....

		--Steve Bellovin

Serious use of 1200 bps for netnews took at least a year, maybe even two.

Newsgroups: news.misc,news.admin
Path: utzoo!henry
From: he...@utzoo.uucp (Henry Spencer)
Subject: Re: Some interesting news stats
Message-ID: <1988Oct26.161329.4192@utzoo.uucp>
Organization: U of Toronto Zoology
References: <> <2206@looking.UUCP> <>
Date: Wed, 26 Oct 88 16:13:29 GMT

In article <> (Steven Bellovin[jsw]) writes:
>> When the net started, articles were sent unbatched, uncompressed and
>> over 1200 bps modems....
>In fact, when the net started 300 baud was the norm; very few sites had
>1200 bps...

In fact, it wasn't unheard-of for early news connections to be 300 baud
MANUALLY DIALED.  Boy, were we glad to get our first 1200-baud autodialing
modem!
The dream *IS* alive...         |    Henry Spencer at U of Toronto Zoology
but not at NASA.                |uunet!attcan!utzoo!henry
