Path: nntp.gmd.de!Germany.EU.net!howland.reston.ans.net!gatech!
newsxfer.itd.umich.edu!zip.eecs.umich.edu!caen!msunews!netnews.upenn.edu!
hodgkin.med.upenn.edu!blackman
From: black...@hodgkin.med.upenn.edu (David Blackman)
Newsgroups: comp.software.config-mgmt
Subject: Experiences with RCS/CVS?
Date: 3 Feb 1995 12:41:46 GMT
Organization: University of Pennsylvania
Lines: 12
Distribution: world
Message-ID: <3gt8aa$qk6@netnews.upenn.edu>
NNTP-Posting-Host: hodgkin.med.upenn.edu

We're considering alternatives to SCCS to manage our source code. My task 
is to prepare a presentation on RCS.  Another staff member is doing the
same for CVS. I've read Walter Tichy's paper, "RCS--A System for Version 
Control," and I have access to a version of RCS to play around with. I'm 
interested in hearing RCS & CVS users' opinions and experiences, or any other
suggestions. Can anyone help?

-- 
David Blackman
black...@hodgkin.med.upenn.edu
(215) 572-1141

Path: nntp.gmd.de!Germany.EU.net!wizard.pn.com!satisfied.elf.com!
news.mathworks.com!uhog.mit.edu!bloom-beacon.mit.edu!gatech!
howland.reston.ans.net!news.sprintlink.net!news.rain.org!coyote.rain.org!
not-for-mail
From: l...@rain.org (Pierre Asselin)
Newsgroups: comp.software.config-mgmt
Subject: Re: Experiences with RCS/CVS?
Date: 5 Feb 1995 11:39:04 -0800
Organization: RAIN Public Access Internet (805) 967-RAIN
Lines: 26
Message-ID: <3h39go$550@coyote.rain.org>
References: <3gt8aa$qk6@netnews.upenn.edu>
Reply-To: p...@verano.sba.ca.us
NNTP-Posting-Host: coyote.rain.org
X-Newsreader: NN version 6.5.0 (NOV)

In <3gt8aa$...@netnews.upenn.edu>
black...@hodgkin.med.upenn.edu (David Blackman) writes:

>We're considering alternatives to SCCS to manage our source code. My task 
>is to prepare a presentation on RCS.  Another staff member is doing the
>same for CVS. I've read Walter Tichy's paper, "RCS--A System for Version 
>Control," and I have access to a version of RCS to play around with. I'm 
>interested in hearing RCS & CVS users' opinions and experiences, or any other
>suggestions. Can anyone help?

Here's what I found:

    1) RCS is easy to use.
    2) CVS has a steep learning curve for the administrator,
       and a few tricks to figure out for the users.
    3) I wouldn't leave home without 'em.

I started using CVS for source management and the CMU depot for the
installed files, both at work and at home.  Works great.  With CVS, I
can compile freebies from the net with whatever changes are required
to install locally, and keep things straight when patches show up.
Mostly.
-- 
--Pierre Asselin, Santa Barbara, California
  p...@verano.sba.ca.us
  l...@rain.org

Path: nntp.gmd.de!news.rwth-aachen.de!newsserver.rrzn.uni-hannover.de!
aix11.hrz.uni-oldenburg.de!uniol!zib-berlin.de!news.mathworks.com!
newshost.marcam.com!charnel.ecst.csuchico.edu!psgrain!rainrgnews0!
news.teleport.com!usenet
From: d...@atlas.com (Dan Thurman)
Newsgroups: comp.software.config-mgmt
Subject: Re: Experiences with RCS/CVS?
Date: 11 Feb 1995 23:26:02 GMT
Organization: Atlas Telecom
Lines: 146
Distribution: usa
Message-ID: <3hjh2a$d1t@desiree.teleport.com>
References: <3gt8aa$qk6@netnews.upenn.edu>
Reply-To: d...@atlas.com
NNTP-Posting-Host: ip-pdx4-01.teleport.com

	Please excuse me if this is an inappropriate response to your question,
	but I'm trying to kill two birds with one stone.  First, because I didn't
	like the "copying" model used by CVS,  I'm hoping that CVS's creators might
	look into the following symbolic-links concept and perhaps incorporate
	these ideas into CVS?  Dunno... It is something worth looking into, IMHO.



	I have chosen to use RCS,  because I have looked into all the commercial
	vendors and the free software (RCS, CVS) and decided that RCS was
	sufficient for our purposes.  The following factors weighed in our
	decision to use RCS:

		1. Cost
		2. Trust in a vendor (reputation)
		3. Flexibility to change.
		   New platforms, new environments, ability of archive product
		   to adapt to changing development needs

	Since I wanted COMPLETE control over the archive product and to keep costs
	down,  we were better off writing our own tools, since we can continually
	adapt them to changing development needs.  The biggest problem with a vendor
	is that they *may not* be able to port their tool to a platform we need,
	and even if they did,  we'd have to pay for it.  Each and every time!

	I knew that I'd have to come up with a model best suited to my company.
	It means that I'd have to create new tools, scripts, GUI interfaces,
	whatever was needed to help users with source-control and configuration
	management issues.

	I have chosen not to use CVS because I didn't like its "copy archive to
	target" model, which does nothing to conserve disk space.  Read on.

	If I remember correctly,  the model used by CVS is to COPY
	all or parts of the SOURCE ARCHIVE to the local directory where
	development or work is to take place.  It does absolutely nothing
	to conserve disk space,  i.e. no use of SYMBOLIC links into the
	SOURCE ARCHIVE.  For LARGE projects,  this was totally unacceptable
	given limited disk resources.  Even if there were unlimited disk
	resources, it would be an inefficient use of disk space.
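
	To make the point concrete,  a checkout under this model looks
	roughly like this (repository path made up for illustration):

		cvs -d /srcs/cvsroot checkout p4   # copies EVERY file of p4
		                                   # into ./p4 -- a complete,
		                                   # private working copy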

	The model I have come up with uses symbolic links, carefully.  Yes,
	there is a limit to symbolic-link depth, and there is the "danger" of
	"dangling links" or links that point to themselves (a loop),  but for
	three years we have been able to avoid these pitfalls because I was
	constantly aware of them and wrote programs that specifically guard
	against them.  Anyway, continuing,  the model we use is as follows:

                                      SOURCE
                                     ARCHIVES
                                       ROOT
         +--------+----------+----------+-------+-----------+----...----+
         |        |          |          |       |           |           |
         p1       p2         p3         p4      p5          p6   ...    pN
                                        |
                +-----------+-----------+-----------+
                |           |           |           |
               RCS         FOO         GOO         HOO
                            |           |           |
                           RCS         RCS         RCS


	Clearly,  every directory has an RCS sub-directory.  The TOP-LEVEL
	product directory (p4) contains at least one "master makefile",
	which controls the entire build process underneath it.  Makefiles are
	the main controllers of sources, dependencies, check-outs, labeling of
	symbolic names, and so on.  Certain symbolic labels are reserved for
	releases, and that is strictly enforced; all other labels are permitted.
	I also keep a "label history" in a flat file for protection.


	Using the above model,  and taking this one step further...  the developer
	can use a perl script (developed internally) called "cl" as follows:

			cl -s /srcs/p4 -d p4_v1.5

	What this script does is create a CLONE of /srcs/p4, as a new cloned
	directory tree called "p4_v1.5".

	The clone script creates clones as follows (a rough sketch in sh
	appears after this list):

		1. All directories and sub-directories, EXCEPT RCS directories,
		   are duplicated as real directories in the user's area.

		2. RCS sub-directories are SYMBOLICALLY LINKED back into
		   their respective locations in /srcs/p4.

		3. Symbolic links are duplicated where found.
		   I usually discourage the practice of creating symbolic links
		   in the SOURCE ARCHIVES,  but have not entirely ruled them out,
		   as there are a FEW good ways to use them.
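
	A stripped-down sketch of the cloning idea in plain sh (the real "cl"
	does much more; treat this as illustration, not our actual code):

		#!/bin/sh
		# clone $1 (an absolute archive path, e.g. /srcs/p4) into $2:
		# plain directories are recreated, RCS directories are
		# symlinked back into the master archive.
		src=$1; dst=$2
		(cd "$src" && find . -type d -print) | while read d
		do
			case "$d" in
			*/RCS)	ln -s "$src/$d" "$dst/$d" ;;	# link to archive
			*)	mkdir -p "$dst/$d" ;;		# real directory
			esac
		done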

	There are MANY options in our clone script,  which does updates,
	link-checking, and other operations too long to detail here.  But
	the point is, its function is to create and maintain clones.

	At this point,  we have in the user's area a clone of /srcs/p4 under
	a new directory name: p4_v1.5.  A builder would simply "cd"
	into the new clone tree and issue "make release".  You'd think...
	sheesh... but there is NOTHING there but empty directories and symbolic
	links!  You'd also realize that 5000 directories and links take a
	total space of only ~5MB.  Anyway, with GNU make,  which supports RCS,
	it "knows" how to find a makefile,  check it out, and then fire off
	the entire build.  Of course all of this hinges on good maintenance
	of makefiles, and possibly of the labels (if used) in the RCS archives.
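
	(GNU make's built-in RCS rules are what make this work: for any file
	it needs, if only RCS/file,v exists, it runs "co" to get a working
	copy.  A session in a fresh clone might look something like this --
	file names invented for illustration:)

		$ ls
		RCS                          # only the linked archive so far
		$ make prog
		co  RCS/Makefile,v Makefile  # make checks out its own makefile,
		co  RCS/main.c,v main.c      # then each source as it needs it,
		cc -c main.c                 # and fires off the build proper
		cc -o prog main.o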

	Since cloning is available,  I also use the following method for
	multiple users doing development on a common product (a sketch of
	the kickoff follows this list):

		1. I have an automatic program (a script) which creates
		   a cloned tree from the SOURCE archives, in which a
		   build is automatically started after business hours.

		2. Product developers or maintainers come in the morning
		   and start their day by cloning off the built product
		   build, and off they go...
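
	(The after-hours kickoff is nothing fancy.  Roughly, with made-up
	paths and times:)

		# crontab entry on the build machine:
		# clone the archive and build it at 10pm, Mon-Fri
		0 22 * * 1-5  /usr/local/scm/nightly_build

		# where nightly_build is essentially:
		#!/bin/sh
		cl -s /srcs/p4 -d /builds/p4_nightly &&
			cd /builds/p4_nightly && make release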

	Of course,  depending on what they are working on,  they at most have
	to do an incremental build.  The worst case is an entire build
	of the tree they had cloned.

	This model allows for independent development,  makes best use of
	disk resources,  allows for rapid development,  and, where possible,
	minimizes compile-time turnaround.

	RCS typically does not allow users to edit files that have been "locked",
	whereas CVS will allow it and attempt to merge the changes in.  This is
	one feature I like, but I don't really need it.  I think that locked files
	force development engineers to communicate,  so they are working together,
	rather than piling up their modifications "on top" of each other.
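
	(In concrete terms, the two disciplines look like this.  File name
	invented for illustration:)

		# RCS: pessimistic locking -- one writer at a time
		co -l foo.c         # check out AND lock; a second lock fails
		vi foo.c
		ci -u foo.c         # check in; keep a read-only working copy

		# CVS: optimistic merging -- everyone edits a private copy
		cvs update foo.c    # fold others' committed changes into yours
		cvs commit foo.c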

	My model has been very good to me and my company.  We have a large
	number of platforms, users, and source files (one product contains
	over 5000 files),  and we can go forward or backward in versions,
	and produce patches of previously released versions.


---
Dan Thurman,  Atlas Telecom, 4640 SW Macadam Ave., Portland, OR 97201
 Home : d...@teleport.com     WwWwWwWwWwWwW  Work : d...@atlas.com
 Voice: [USA] 1+503.645.8631  (   O v O   )  Voice: [USA] 1+503.228.1400 x251
 Fax  : [USA] 1+503.531.9353     (  O  )     Fax  : [USA] 1+503.228.0368

Path: nntp.gmd.de!news.rwth-aachen.de!newsserver.rrzn.uni-hannover.de!
aix11.hrz.uni-oldenburg.de!uniol!zib-berlin.de!news.mathworks.com!hookup!
swrinde!cs.utexas.edu!news.sprintlink.net!internex.net!hell!seiwald
From: seiw...@hell.uucp (Christopher Seiwald)
Newsgroups: comp.software.config-mgmt
Subject: Re: Experiences with RCS/CVS?
Date: 13 Feb 95 05:38:24 GMT
Organization: InterNex Information Services, Inc.
Lines: 35
Distribution: usa
Message-ID: <seiwald.792653904@hell>
References: <3gt8aa$qk6@netnews.upenn.edu> <3hjh2a$d1t@desiree.teleport.com>
NNTP-Posting-Host: hell.tea.org

d...@atlas.com (Dan Thurman) writes:

> If I remember correctly,  the model used by CVS is to COPY
> all or parts of the SOURCE ARCHIVE to the local directory where
> development or work is to take place.  It does absolutely nothing
> to conserve disk space,  i.e. no use of SYMBOLIC links into the
> SOURCE ARCHIVE.  For LARGE projects,  this was totally unacceptable
> given limited disk resources.  Even if there were unlimited disk
> resources, it would be an inefficient use of disk space.

I assume the last statement is hyperbole: if you have an unlimited resource,
it is hard to use it inefficiently.  But about the previous statement:
how much disk space are you saving?  And how much is it worth?  I've
found that even with a large number of files (12k?) and a large number
of developers (~100), the use of symbolic links may be a loser.  Aside
from the usual semantic confusion introduced by symbolic links, it also
means that all your developers are pounding the same server.  This is
mildly bad when the server gets loaded, and downright miserable when the
server goes down.

The CVS copy/modify/merge model means users are disconnected from the
SCM resource most of the time, and consequently insulated from it.
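
That is, the server is touched only at the synchronization points;
something like this (module name invented):

	cvs checkout proj   # touch the server once...
	                    # ...then hack for hours or days; the server
	                    # can be slow or down, it doesn't matter
	cvs update          # resync: others' committed changes merge in
	cvs commit          # touch the server again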

If you save 100MB per user with symbolic links, you're talking a $50
capital investment nowadays.  That's about the worth of 1 hour of that
user's time.  How often has that hour been wasted tracking down symbolic
link funniness or waiting for the server to come back up?

I contend that these days disk space is effectively limitless for
source management.  The burden is the management of the files and their
revisions, and symbolic links don't make that any easier.

My $2e-2.

Christopher

Newsgroups: comp.software.config-mgmt
From: m...@spclmway.demon.co.uk (Mark Bools)
Path: nntp.gmd.de!newsserver.jvnc.net!howland.reston.ans.net!
news.sprintlink.net!peernews.demon.co.uk!spclmway.demon.co.uk!mab
Subject: Re: Experiences with RCS/CVS?
Distribution: usa
References: <3gt8aa$qk6@netnews.upenn.edu> <3hjh2a$d1t@desiree.teleport.com> 
<seiwald.792653904@hell>
Organization: Siemens Traffic Controls Limited, UK
Reply-To: m...@spclmway.demon.co.uk
X-Newsreader: Demon Internet Simple News v1.27
Lines: 35
X-Posting-Host: spclmway.demon.co.uk
Date: Thu, 16 Feb 1995 10:07:41 +0000
Message-ID: <792929261snz@spclmway.demon.co.uk>
Sender: use...@demon.co.uk

Hmmm!  We use the copy/modify/merge model for controlling source CI but
for derived CI we use symbolic links of one form or another (UNIX, VMS and
PC platforms - so symbolic link is probably being broadly interpreted here!)
Basically, because derived objects tend to be more stable in our system -
produced by the build manager (the user's workspace is their problem) - these
items are provided on an open access basis to project team members.  Keep
It Simple is my motto: each system build is released to the engineers in
such a way that they can *see* the derived objects from their workspace
without needing to copy stuff about (all the tools they use locally can
*see* these objects too).  If they want to modify the sources, compile,
relink etc., they can copy/modify the sources to their workspace, do all
the interesting stuff using the provided derived objects where necessary,
with all their local workspace *shadowing* the derived objects pool
provided by the build manager.  Once these changes are tested they are
returned to the source control system and life goes on....

Net result: Benefits to the user: derived objects are easily available and,
        since these are quite often resource-intensive to produce, it
        saves them time and me disk space.
        Benefits to the CM system: reduced disk usage; users are not
        constantly reproducing the derived objects that they only need to
        link against (etc.).  The users' source changes are isolated, the
        source management tool does not have to keep track of the whims of
        users, and I do not have to track down errors due to weird and
        complex sym.links.

Having said all that, my system is probably much simpler than some of yours.
I do not have to deal with massively distributed development teams; all
our guys work on the same site.

-- 
Mark Bools

- You guessed it.  All opinions expressed herein are my own and in no way
reflect the policy or opinion of SPCL.

Path: nntp.gmd.de!stern.fokus.gmd.de!ceres.fokus.gmd.de!zib-berlin.de!
news.mathworks.com!panix!bloom-beacon.mit.edu!apollo.hp.com!lf.hp.com!
hpscit.sc.hp.com!news.dtc.hp.com!col.hp.com!sony!nntp-sc.barrnet.net!
news.fujitsu.com!amdahl.com!pacbell.com!ico.net!lowell.bellcore.com!
nntp-ara!geoff
From: ge...@wodehouse.bellcore.com (Geoffrey M Clemm)
Newsgroups: comp.software.config-mgmt
Subject: Re: Experiences with RCS/CVS?
Date: 17 Feb 1995 04:10:07 GMT
Organization: Bellcore
Lines: 65
Distribution: usa
Message-ID: <GEOFF.95Feb16231007@wodehouse.bellcore.com>
References: <3gt8aa$qk6@netnews.upenn.edu> <3hjh2a$d1t@desiree.teleport.com>
	<seiwald.792653904@hell> <792929261snz@spclmway.demon.co.uk>
NNTP-Posting-Host: wodehouse.bellcore.com
In-reply-to: mab@spclmway.demon.co.uk's message of Thu, 16 Feb 1995 10:07:41 +0000


Various potentially interesting statements are made in Mark's article,
but there is not enough detail to determine what is really being done.
In particular, the following questions come to mind:

In article <792929261...@spclmway.demon.co.uk> m...@spclmway.demon.co.uk 
(Mark Bools) writes:
   Hmmm!  We use the copy/modify/merge model for controlling source CI but

What is "CI" ... configuration item ? 

   for derived CI we use symbolic links of one form or another (UNIX, VMS and
   PC platforms - so symbolic link is probably being broadly interpreted here!)

I'm not sure how to "broadly interpret" the term symbolic link.  It's a
well-defined term on Unix systems - what is the meaning on VMS and PCs,
if it is not the Unix kind ?

   Basically, because derived objects tend to be more stable in our system - 

More stable than what?  They certainly can't be more stable than the
source code from which they are derived.

   produced by the build manager (the user's workspace is their problem)

In what sense is the user's workspace "their problem"?  If you don't
support versioning and derived objects in the workspaces, then you're
not doing much for your users.

   these
   items are provided on an open access basis to project team members.

What does "provided on an open basis" mean?

   Keep
   It Simple is my motto: each system build is released to the engineers in
   such a way that they can *see* the derived objects from their workspace
   without needing to copy stuff about (all the tools they use locally can
   *see* these objects too).

What exactly does it mean to say that they "see the derived objects from
their workspace" ?

   If they want to modify the sources, compile,
   relink etc., they can copy/modify the sources to their workspace, do all
   the interesting stuff using the provided derived objects where necessary,
   with all their local workspace *shadowing* the derived objects pool
   provided by the build manager.

OK, you copy over a .h file that affects a variety of .o's in a variety of
libraries.  How does your system know when it can use derived files from
the derived objects pool provided by the build manager, and when it has
to build new ones for the user?  Viewpathing of some kind?

   Once these changes are tested they are
   returned to the source control system and life goes on....

So what determines which users see these changes, and when?
When a change is returned to the source control system, does this
invalidate the now out-of-date objects in the derived file pool?

Cheers,

Geoff
--
ge...@bellcore.com

Path: nntp.gmd.de!stern.fokus.gmd.de!ceres.fokus.gmd.de!zib-berlin.de!
news.mathworks.com!news.alpha.net!uwm.edu!cs.utexas.edu!swrinde!pipex!
sunic!seunet!seunet!news2.swip.net!enea.se!not-for-mail
From: som...@enea.se (Erland Sommarskog)
Newsgroups: comp.software.config-mgmt
Subject: Cost of HW/SW and cost of man power
Date: 18 Feb 1995 11:47:54 +0100
Organization: Foresta dell'estate
Lines: 35
Message-ID: <3i4j8q$45f@gordon.enea.se>
References: <3gt8aa$qk6@netnews.upenn.edu> <3hjh2a$d1t@desiree.teleport.com>
NNTP-Posting-Host: gordon.enea.se

Dan Thurman (d...@atlas.com) writes:
>I have chosen to use RCS,  because I have looked into all the commercial
>vendors and the free software (RCS, CVS) and decided that RCS was
>sufficient for our purposes.  The following factors weighed in our
>decision to use RCS:
>
>		1. Cost

So far so good.

>Since I wanted COMPLETE control over the archive product and to keep costs
>down,  we were better off writing our own tools, since we can continually
>adapt them to changing development needs.  The biggest problem with a vendor
>is that they *may not* be able to port their tool to a platform we need,
>and even if they did,  we'd have to pay for it.  Each and every time!
>
>I knew that I'd have to come up with a model best suited to my company.
>It means that I'd have to create new tools, scripts, GUI interfaces,
>...
>For LARGE projects,  this was totally unacceptable
>given limited disk resources.  Even if there were unlimited disk
>resources, it would be an inefficient use of disk space.

Interesting. This is the philosophy where software and hardware
from vendors are expensive, but the time of your own staff has
very little cost. I realize that this is actually true at many
universities, but I'm surprised to see this conception displayed
by a commercial company.

Yes, there are situations where it might be better to build your
own tools than to buy them from a vendor, because that's the only
way you can get them tailor-made. But I am very skeptical that *cost*
would be an argument for writing your own tool.
-- 
Erland Sommarskog, som...@enea.se, Stockholm

Path: nntp.gmd.de!stern.fokus.gmd.de!ceres.fokus.gmd.de!zib-berlin.de!
news.mathworks.com!newshost.marcam.com!charnel.ecst.csuchico.edu!psgrain!
rainrgnews0!news.teleport.com!usenet
From: d...@atlas.com (Dan Thurman)
Newsgroups: comp.software.config-mgmt
Subject: Re: Experiences with RCS/CVS?
Date: 19 Feb 1995 01:43:12 GMT
Organization: Atlas Telecom
Lines: 88
Distribution: world
Message-ID: <3i67nh$4ep@desiree.teleport.com>
References: <seiwald.792653904@hell>
Reply-To: d...@atlas.com
NNTP-Posting-Host: ip-pdx4-29.teleport.com

In article 792653904@hell, seiw...@hell.uucp (Christopher Seiwald) writes:
>d...@atlas.com (Dan Thurman) writes:
>
>> If I remember correctly,  the model used by CVS is to COPY
>> all or parts of the SOURCE ARCHIVE to the local directory where
>> development or work is to take place.  It does absolutely nothing
>> to conserve disk space,  i.e. no use of SYMBOLIC links into the
>> SOURCE ARCHIVE.  For LARGE projects,  this was totally unacceptable
>> given limited disk resources.  Even if there were unlimited disk
>> resources, it would be an inefficient use of disk space.
>
>I assume the last statement is hyperbole: if you have an unlimited resource,
>it is hard to use it inefficiently.  But about the previous statement:
>how much disk space are you saving?  And how much is it worth?  I've
>found that even with a large number of files (12k?) and a large number
>of developers (~100), the use of symbolic links may be a loser.  Aside
>from the usual semantic confusion introduced by symbolic links, it also
>means that all your developers are pounding the same server.  This is
>mildly bad when the server gets loaded, and downright miserable when the
>server goes down.
>
	Snip!
>
   Well,  this is a problem only if you don't efficiently manage your master
   source fileservers in a way that allows a slave source fileserver,
   containing a mirror of the master, to take over if the master should drop
   dead.  It is possible to balance the loading of your fileservers (master
   or slave) given the load of the particular fileserver in use.  There are
   several methods to handle the problems you're referring to, but this is
   not the issue I am debating here.  I can understand "mission-critical"
   situations that call for it,  but small to mid-size companies cannot afford
   "mission-critical" scenarios given the hardware required to support them.
   Let's get real here...  we're not "NASA"...
   
   We have one (of many) product builds that takes up to 14 hours, and
   when it is all finished, the build occupies close to 700MB.  If you use
   the CVS method... oh boy...  that can be a real time-consuming and terrible
   waste of time/efficiency if several engineers are working in parallel, and
   each performs their own 14-hour build before actually starting development
   work.  You have to understand that in our particular case we use LOTS of
   shared libraries,  database files (Oracle-based), and many sub-modules in
   which most of our developers do their work.

   So this is why the CVS model isn't for us.  As I explained in my posting,
   we have a build area that developers can clone from to do development work.
   This method allows for parallel development by any number of engineers,
   whether they are using it on the local system or via an NFS-mounted
   filesystem.  It is also quite cheap to implement ;-)

   You say we are pounding our fileservers... nah...  they are only used
   when sources are needed, so in reality this is a small issue for us.  We
   even minimized the "all-at-once" massive check-outs, as our build process
   only checks out the needed files just prior to compilation.  This is done
   by our makefiles, which incrementally check out a single file or set of
   files per compilation.  So the actual NFS traffic is quite negligible!
   (We actually checked this!)  Whatever CMS you use, the sources have to be
   checked out at least once anyway.
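
   (Roughly what one of our compile rules boils down to -- file name
   invented for illustration:)

	co -u RCS/widget.c,v widget.c   # fetch just this one file over NFS
	cc -c widget.c                  # compile onto the LOCAL disk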

   Our build platforms do most of the work, compiling and depositing data on
   their LOCAL platform drives,  and not necessarily through NFS...  the
   source server is only used for checking sources in and out,  whether it is
   for building a product release or for an individual doing development work.
   We are well aware of NFS issues,  and we don't move massive amounts of
   platform-compiled data via NFS...  we only move the sources!

   We are quite proud of our methods, and they serve us quite well.  I guess
   you'd have to try it yourself to see its benefits.  We actually tried
   to give the CVS system the benefit of the doubt, and it is OK for small
   and certain situations,  but it is NOT OK for our particular applications.

	MY $2e-2! (ha)
>
        Snip!
>
>My $2e-2.
>
>Christopher




---
Dan Thurman,  Atlas Telecom, 4640 SW Macadam Ave., Portland, OR 97201
 Home : d...@teleport.com     WwWwWwWwWwWwW  Work : d...@atlas.com
 Voice: [USA] 1+503.645.8631  (   O v O   )  Voice: [USA] 1+503.228.1400 x251
 Fax  : [USA] 1+503.531.9353     (  O  )     Fax  : [USA] 1+503.228.0368

Newsgroups: comp.software.config-mgmt
From: m...@spclmway.demon.co.uk (Mark Bools)
Path: nntp.gmd.de!newsserver.jvnc.net!nntpserver.pppl.gov!princeton!udel!
news.sprintlink.net!howland.reston.ans.net!pipex!peernews.demon.co.uk!
spclmway.demon.co.uk!mab
Subject: Re: Experiences with RCS/CVS?
Distribution: usa
References: <3gt8aa$qk6@netnews.upenn.edu> <3hjh2a$d1t@desiree.teleport.com> 
<GEOFF.95Feb16231007@wodehouse.bellcore.com>
Organization: Siemens Traffic Controls Limited, UK
Reply-To: m...@spclmway.demon.co.uk
X-Newsreader: Demon Internet Simple News v1.27
Lines: 148
X-Posting-Host: spclmway.demon.co.uk
Date: Mon, 20 Feb 1995 15:44:34 +0000
Message-ID: <793295074snz@spclmway.demon.co.uk>
Sender: use...@demon.co.uk

In article <GEOFF.95Feb16231...@wodehouse.bellcore.com>
           ge...@wodehouse.bellcore.com "Geoffrey M Clemm" writes:

> Various potentially interesting statements are made in Mark's article,
> but there is not enough detail to determine what is really being done.

 Sorry, Geoff, I seem to be having a problem with defining my terms (probably
 'cos I'm new to this newsgroup business, and my CM experience has been
 rather insulated until recently).

> In particular, the following questions come to mind:
> 
> In article <792929261...@spclmway.demon.co.uk> m...@spclmway.demon.co.uk (Mark
>  Bools) writes:
>    Hmmm!  We use the copy/modify/merge model for controlling source CI but
> 
> What is "CI" ... configuration item ? 

 Yes.

>    for derived CI we use symbolic links of one form or another (UNIX, VMS and
>    PC platforms - so symbolic link is probably being broadly interpreted
>    here!)
> 
> I'm not sure how to "broadly interpret" the term symbolic link.  It's a
> well-defined term on Unix systems - what is the meaning on VMS and PCs,
> if it is not the Unix kind ?

 Having only just started to handle a UNIX CM environment, symbolic links
 are new stuff for me (I come from a VMS environment).  As I understand it, a
 symbolic link simply hooks one part of a UNIX file system to another in
 a transparent fashion.  So a user looking at a subdirectory of their own
 workspace may be looking at a directory created by themselves, or at a
 complete filing system mounted on a separate device or even a separate
 machine.  The bottom line being that they do not need to know.
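
 (For example - if I have the syntax right; the paths here are invented:)

	ln -s /net/bldsrv/pools/p4_b7 objs  # "objs" now looks like a local
	                                    # directory, but really lives
	                                    # on the build server
	ls objs                             # tools just see files; nothing
	                                    # was copied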

 Assuming I have that straight - please let me know if I don't!

 VMS does not have a direct equivalent.  You can create similar effects using
 logical search paths, though.  So a logical SYSTEM_BUILD: may refer to one
 directory or to many directories in a search path.  In our VMS environment we
 have it rigged so a user can set their current default directory to a search
 list.  This gives a similar effect to UNIX symbolic links but is much
 more restricted.

 All I am really interested in is making the system-build derived objects
 available to the user in such a way that they do not need to *know* whether
 the derived object has been created locally or in the system build.  They
 use an MMS file (similar to MAKE) and it works out whether they have changed
 any files locally that will need a rebuild.

> 
>    Basically, because derived objects tend to be more stable in our system - 
> 
> More stable than what?  They certainly can't be more stable than the
> source code from which they are derived.

 Sorry can't help on this one, I haven't got a clue what I was on about.

> 
>    produced by the build manager (the user's workspace is their problem)
> 
> In what sense is the user's workspace "their problem"?  If you don't
> support versioning and derived objects in the workspaces, then you're
> not doing much for your users.

 Versioning is all handled by CVS (UNIX/PCs) or CMS (VMS); derived objects
 are managed in build pools (when a build is done, all the derived objects
 are placed in a directory tree, which I call the build pool.  A user can
 refer to the build pool using a logical UTC_SYSBLD - we have a utility which
 sets up all the necessary links based on a Project and Build Name).  Their
 workspace is not controlled in any way; they are free to make as much mess
 as they like.  We control the *door to the vault*, preventing them from
 putting their mess back into the control library.

> 
>    these
>    items are provided on an open access basis to project team members.
> 
> What does "provided on an open basis" mean?

 We do not prevent anyone reading the build pool of derived objects.

> 
>    Keep
>    It Simple is my motto: each system build is released to the engineers in
>    such a way that they can *see* the derived objects from their workspace
>    without needing to copy stuff about (all the tools they use locally can
>    *see* these objects too).
> 
> What exactly does it mean to say that they "see the derived objects from
> their workspace" ?

 I think this is revisiting the symbolic links thing. We have supplied a 
 simple utility which, given a project and build name, will set up all
 the links necessary for them to build against a specific set of derived
 objects.

> 
>    If they want to modify the sources, compile,
>    relink etc., they can copy/modify the sources to their workspace, do all
>    the interesting stuff using the provided derived objects where necessary,
>    with all their local workspace *shadowing* the derived objects pool
>    provided by the build manager.
> 
> OK, you copy over a .h file that affects a variety of .o's in a variety of
> libraries.  How does your system know when it can use derived files from
> the derived objects pool provided by the build manager, and when it has
> to build new ones for the user?  Viewpathing of some kind?
 
 Yes.  Again, this is the symbolic link thing.  Sorry, I seem to have
 really confused things.  I assume you are coming from a ClearCase
 perspective?  As I understand it, the views into a VOB give your users
 dynamic control over precisely what they see in a given view.  Our VMS system
 is not this flexible.  The *view* is set by the user connecting to a build
 pool which contains the derived objects for that build.  The CMS library
 is linked in in such a way that the various tools the engineer will use can
 compare sources in their current directory to those in the CMS library
 which were used to produce the build pool.

>    Once these changes are tested they are
>    returned to the source control system and life goes on....
> 
> So what determines which users see these changes, and when?

 Once a build has been done, the pool is made available to the project.
 A utility allows them to connect to the pool (similar to your viewpath,
 but less flexible).

> When a change is returned to the source control system, does this
> invalidate the now out-of-date objects in the derived file pool?

  No.  The derived pool is generated from a known baseline within the
  control system.  Everyone works from this known baseline until another
  baseline is issued (hence a new build pool is made available).


  Boy, this is really hard - translating from a homegrown VMS system to
  a commercial product on UNIX.  I'll try and get a simple diagram together;
  we may find we are talking about the same things but using different
  terminology.

-- 
Mark Bools

    As usual, all opinions expressed herein are my own and in no way
    reflect the opinions or policies of SPCL.

Path: bga.com!news.sprintlink.net!howland.reston.ans.net!gatech!udel!
news.mathworks.com!newshost.marcam.com!charnel.ecst.csuchico.edu!psgrain!
news.teleport.com!usenet
From: d...@atlas.com (Dan Thurman)
Newsgroups: comp.software.config-mgmt
Subject: Re: Cost of HW/SW and cost of man power
Date: 28 Feb 1995 21:17:21 GMT
Organization: Atlas Telecom
Lines: 100
Message-ID: <3j03t1$as6@desiree.teleport.com>
References: <3ihu4k$65m@newsbf02.news.aol.com>
Reply-To: d...@atlas.com
NNTP-Posting-Host: ip-pdx4-29.teleport.com

	(snip!)
		1. Cost
	(snip!)

-=-=-=-=-=-=-=-=-=-=-=-=-=-

	Oh...  I missed this thread...  Guess it was taken from a previous
	posting I made ;-)

	I had read several responses to this article,  and I understand
	what you all are saying...  but the real problem is that it depends on
	the mindset and decisions of the management in their efforts to keep a
	company afloat.  There are things that go on in management such that
	issues like SCMS or (multi-)platform support may have
	to take a back seat for the time being,  as there may be other critical
	"fires" that need to be put out.  Management has to see the forest
	for the trees,  whereas engineers see their particular problems (trees)
	and may not necessarily understand why management made decisions
	that were not in their favor at the time.  Management has to keep the
	company afloat first and direct cash flow where it is critical... and
	this depends on all the fires that come to their attention, which may
	also affect their business decisions.  So...  my contention is that
	cost makes a huge difference if cash reserves are low or too critical
	to be taken lightly by management.  Of course,  cost isn't the *only*
	factor...  I had listed the others (I think) in my original posting.

	The common thing that happens at a startup or a small company, from
	what I have seen, is that these issues (SCMS and (multi-)platform support)
	are given very little time or attention in the development cycle as
	product development becomes the main focus.  At some point in the
	cycle,  usually later, management might then hire someone to help
	them out with the SCMS, (multi-)platform support, and/or other
	issues, if it is critical enough and if there are enough cash reserves
	left over to afford it.  By this time,  the company is growing,  there
	are more engineers involved, perhaps more machines... and the company
	has gone from a startup to a small or mid-sized company.  At this
	point... this is usually where the help-person (SCME/SA/Eng.) comes in.
	
	After the "help-person" learns about all the issues/problems involved
	at this particular company, and researches the best SCMS
	and/or (multi-)platform support package for the company in question,
	s/he will then put together a comprehensive portfolio for management
	to review.

	That portfolio lists vendors and their offerings, along with
	the alternative methods (SCCS, RCS, other mixes of SCMS methods) used in
	actual practice...  Management is given the full scope of the
	pros and cons of any business decision they make based on that
	portfolio.

	IMHO,  the following things come to management's mind:

		o It costs ($)$$$$.$$ dollars up front to support this
		  "great" package, and this is the "best" there is and the
		  "best fit" for our company...  we can't afford it.
		o Vendor X charges ($)$$$.$$ for platform X, and ($)$$$.$$
		  for platform Y, ....  sheesh!  This is adding up fast!!
		o We've looked at the "lesser" offerings,  but they don't
		  have what we need...  worse, we are committing our sources
		  to this vendor, and we can't jump from vendor X to a "better"
		  vendor Y without resorting to a painful and costly conversion...
		o We are committing our sources to vendor X, and feel
		  that we may get ourselves into a jam since we don't have the
		  sources...  can we trust this vendor to serve us?
		o Vendor X doesn't support platform X...
		o Vendor X doesn't have a "do it all" gui...
			.
			. Other details...
			.
		o We can take complete control of our SCMS/development direction
		  by using "free software" as a base; we can port it to most
		  platforms and we retain total control over our future direction,
		  so let's hire a person to take care of these issues.  Since we
		  have full control of our SW development direction,  we can
		  always get ourselves out of any problem that comes up.

	What's interesting is...  most companies take the very last option,
	since they look at (1) cost, (2) control of present and future SW
	development direction, and (3) trust in their own "help-person" to
	lead them.

	If cost were not a factor and vendor X did *everything* company X wants,
	including (multi-)platform support at a reasonable cost, then this issue
	would become moot.

	Disclaimer:  Speaking for myself.  The above statements do not
		     necessarily apply to my current employer. Nor does
		     my current employer support my views.  My views are
		     mine and mine alone!


---
---------------------------_----------------------------------------------
Dan Thurman            __(o o)__    EMAIL1: d...@atlas.com (work)
Atlas Telecom          |w| U |w|    EMAIL2: d...@teleport.com
4640 SW Macadam Ave.   | | | | |    VOICE: [USA] 1+503.228.1400 x251
Portland, OR 97201     | | | | |    FAX:   [USA] 1+503.228.0368
--------------------------------------------------------------------------
