by Eugene Volokh, VESOFT
             Published by SUPERGROUP Magazine, July 1988.
   Published in "Thoughts & Discourses on HP3000 Software", 4th ed.

   Everybody  wants system security; everybody agrees that it's a good
thing.  The  trick,  as  usual,  is to turn  good intentions into good
results  (a  worthwhile  point  to  remember  in this  election year).
Sometimes  an  apparently  sound  security system turns  out to have a
fatal  flaw  that not only leaves your  system insecure, but lulls you
into  a  false  sense  of security. Often these  flaws are not visible
until  it's  too  late;  often they are obscured  by some common myths
which everybody believes but which are very badly mistaken.


   Passwords  are the tried and true way of securing computer systems.
Someday,  perhaps, fingerprints and retina scans will take over, and a
computer  will be able to identify with 100% accuracy who is trying to
gain  access; until then, we have to live with "knowledge security" --
identifying a person by whether or not he knows a password.
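To make the idea of "knowledge security" concrete, here is a minimal modern sketch (not what MPE itself did in 1988): the system stores only a salted hash of the password, and a person is "identified" purely by being able to reproduce the secret.

```python
import hashlib
import hmac
import os

def make_record(password: str) -> tuple[bytes, bytes]:
    """Store a salted hash, never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check(password: str, salt: bytes, digest: bytes) -> bool:
    """The user is 'identified' purely by knowing the password."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = make_record("SECRET")
print(check("SECRET", salt, digest))   # the right knowledge: True
print(check("GUESS", salt, digest))    # anyone else: False
```

Note that the whole scheme stands or falls on the assumption below: the check proves only that the person *knows* the password, not who the person is.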

   HOWEVER  --  and this is a big however  -- a password only works if
the  fundamental  assumption  of  knowledge security  holds true. This
assumption is that

   EACH PASSWORD IS KNOWN ONLY TO THE PERSON AUTHORIZED TO KNOW IT.

Seems  obvious? Well, answer honestly:  does this assumption hold true
on  your  system?  Is  each  password  known only to  the person who's
authorized to know it (or, perhaps, also to the system manager, though
even that is not so good)?

   There  are four major possible problems  with passwords, any one of
which  may render your entire password  system useless. I'd guess that
75%  of all HP3000 sites suffer (without realizing it!) from at least
one of the following problems:


   A. USERS WITHOUT ANY PASSWORDS AT ALL.

   B. PASSWORDS THAT ARE KNOWN TO MANY PEOPLE (even if all of them are
      supposedly authorized to know them).

   C. PASSWORDS THAT ARE NEVER CHANGED.

   D. PASSWORDS EMBEDDED IN JOB STREAMS OR OTHER FILES.

   Problem  A seems to be the most  obvious one; but are you sure that
all your users really DO have passwords? How do you know? When was the
last time you looked at a LISTDIR5 >LISTUSER @.@;PASS listing?

   Problem B refers to the common practice of having many people share
the  same password (e.g. 20 people who sign on as CLERK.PAYROLL). It's
an elementary principle of security (in real life just as surely as in
computers)  that the more people there are who know a secret, the more
"leaks" there will be. You might be able to get one person to keep his
password  a  secret;  get  twenty  people  to keep a  secret? -- not a
chance.

   To remain valuable, passwords must be changed periodically (perhaps
as often as once every 30 days, according to some auditing standards).
How often do you change yours?

   Finally,  embedded passwords (which  MPE's :STREAM command REQUIRES
you  to  keep  in all your job streams!)  hurt you in two ways. First,
they  make passwords easier to  figure out (one accidentally :RELEASEd
job stream or misplaced printout and the secret's out); moreover, they
make  it  virtually  impossible to change  passwords, since this would
involve  changing  hundreds  of  places  in  which  the  passwords are
embedded, some of which you might have forgotten about!
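The problem is visible in the !JOB card itself. In a sketch like the following (the job, program, and password names are made up), anybody who can read the job stream file -- or a listing of it -- can read the passwords:

```
!JOB NIGHTLY,MANAGER/USERPASS.SYS/ACCTPASS
!RUN AP010.PUB.AP
!EOJ
```

And once a password appears in dozens of such files, changing it means finding and editing every one of them.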

   Do  all  your  MPE users have passwords?  Is each password known to
only  one person (i.e. no shared passwords)? Are all passwords changed
frequently? Have you eliminated all your embedded passwords?

   If  you can't answer YES to all  four of these questions, then your
password security is strictly MYTHICAL.


   Most  people's  IMAGE databases are the  repositories of their most
valuable  data  --  payroll,  accounting,  customer  information, etc.
Fortunately, IMAGE has a very sophisticated security system to protect
it  -- up to 60-odd different levels of security, with individual read
and write security imposable on each item. More than enough to protect
your precious data.

   Or is it?

   To  open an IMAGE database, you  must pass the database password to
DBOPEN. This is what IMAGE security rests on. Unfortunately, it is the
PROGRAM  -- not the USER --  that must supply the password; typically,
the password ends up being embedded in the program.

   What  does this mean? For almost all IMAGE applications (except for
the few that prompt the user for the IMAGE password -- more about them
later), it means that:

   *  Database passwords are virtually NEVER changed (since this would
     force  a recompilation of all the programs and perhaps changes to
     any QUERY/QUIZ/SUPRTOOL/etc. job streams).

   *  Database passwords are known to  every programmer who's ever had
     to work on a program that accesses the database.

   *  Database  passwords  can be easily read  by anybody who has read
     access  to  source  files, to database-accessing  job streams, or
     even   (though   somewhat  less  easily)  to  the  program  files
     themselves  (which  have  the  passwords stored in  them in clear
     text).

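To see why a clear-text password in a program file is barely a secret at all, consider a sketch of the classic "strings" technique: pull every run of printable characters out of a binary file. (The program image bytes below are invented for illustration.)

```python
import re

def find_strings(data: bytes, min_len: int = 6) -> list[str]:
    """Pull out runs of printable ASCII, the way a 'strings' utility does."""
    return [m.group().decode() for m in re.finditer(rb"[ -~]{%d,}" % min_len, data)]

# A made-up compiled program that embeds its database password in clear text:
program_image = b"\x00\x12DBOPEN\x00\x07ORDERS;\x00READER-PASS\x04\x00"
print(find_strings(program_image))   # ['DBOPEN', 'ORDERS;', 'READER-PASS']
```

Anybody with read access to the program file can run the equivalent of this in seconds; no cryptanalysis required.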
When  was  the  last time you changed  your database passwords? If the
answer  is measured in years (as it is for most people), how can you
expect  them to be really protecting  your data? Just as MPE passwords
embedded  in  job streams are bad, so  are IMAGE passwords embedded in
programs.

   This  problem is even worse  for users of vendor-supplied packages.
The  typical application package includes both programs and databases;
the  vendor  writes  the  programs  expecting  the  databases  to have
particular  passwords,  and the user can't  change those passwords for
fear of the programs aborting.

   The  result  is  that  ALL  THE USERS OF THE  PACKAGE HAVE THE SAME
PASSWORDS.  A person who's worked with  the package in one company can
go to a different company and know the database passwords.

   Some people (a small minority) try to avoid the problem of embedded
database  passwords by prompting the  user for the database passwords.
This  seems  good  in some respects; however,  practically, it has its
own problems.

   The  typical  program is designed with only  one (or perhaps a few)
IMAGE  user classes in mind. If your  program does a DBPUT with an "@"
list  to  a  dataset,  it expects to have full  write access -- if the
program's  user  class does not allow this  sort of access, bad things
will  happen.  Therefore, all users of the  program will have to share
one  user class, and thus one password. The passwords may no longer be
embedded, but they'll have to be shared among fifty or a hundred users
-- how long are they likely to remain secret?

   A  related  issue is protecting a  database against QUERY access --
presumably  your  programs  have appropriate security  and audit trail
features  built  in  to them, but QUERY can  let a user (who knows the
right  password) modify any field in any set, without leaving a trace.
Some  people put a lockword on QUERY; others run DBUTIL and do a >>SET
SUBSYSTEMS  (to forbid subsystem access to the database, or to restrict
it to read-only access).

   The  trouble is that -- no matter  how much you protect QUERY -- an
ingenious  programmer  can  easily  write a program  that does its own
DBOPEN  and  its  own  DBPUTs,  DBDELETEs,  and  DBUPDATEs.  The >>SET
SUBSYSTEMS  won't help you, since it  only works against programs that
actually  check  the  SUBSYSTEMS flag. The only  way to really protect
your database is by keeping your IMAGE passwords secret.

   In  reality,  securing your IMAGE database is  a lot harder than it
looks.  What  you'd really like to do is  to control access not by the
PASSWORD, but by the USER ID of the user trying to access it and/or by
the PROGRAM doing the access; for instance, you might want to say "the
user  MGR.AP running AP010.PUB.AP can have access in user class 10" or
"only  MANAGER.SYS  can use QUERY.PUB.SYS to  write to this database".
For  this,  you'd  really  need  your  own  home-grown  IMAGE security
procedure  (or a third-party one like VESOFT's VEOPEN); unfortunately,
if  you don't do this, you'll have to reconcile yourself to not having
any special security on your IMAGE databases.
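The kind of user-and-program-based check described above might be sketched like this. This is a hypothetical illustration of the idea, not VEOPEN's actual interface; the user names, program names, and table are invented.

```python
# Hypothetical access table: (user id, program) -> allowed IMAGE user class.
ACCESS = {
    ("MGR.AP", "AP010.PUB.AP"): 10,
    ("MANAGER.SYS", "QUERY.PUB.SYS"): 64,
}

def open_class(user: str, program: str) -> int:
    """Grant a user class based on WHO is asking and from WHAT program,
    instead of on a password the program has to carry around embedded."""
    try:
        return ACCESS[(user, program)]
    except KeyError:
        raise PermissionError(f"{user} may not open the base from {program}")

print(open_class("MGR.AP", "AP010.PUB.AP"))   # 10
```

The point of the design is that there is no secret to leak: access follows from who you are and what program you are running, both of which the system already knows.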


   Most  good  application  systems  have  "application  security"  --
security checks built in to the program that make sure that, say, only
MGR.PAYROLL  can  write checks or only MARY.AP  or JACK.AP can add new
vendors  to the accounts payable system. Application security is vital
because  the  computer  has  no  way of  differentiating, say, writing
checks  and adding employees (the two operations might actually update
exactly  the  same datasets); only you can  tell who should be able to
use one option of your system and who should be able to use another.
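In skeleton form, application security is just a permission table consulted before each sensitive function. The users and function names below are the article's own examples; the table itself is a hypothetical sketch.

```python
# Hypothetical function-level permissions for an accounts payable system.
PERMISSIONS = {
    "WRITE-CHECKS": {"MGR.PAYROLL"},
    "ADD-VENDOR":   {"MARY.AP", "JACK.AP"},
}

def authorized(user: str, function: str) -> bool:
    """Application security: the program, not MPE, decides who may run
    each option -- but it can only trust the user id it is handed."""
    return user in PERMISSIONS.get(function, set())

print(authorized("MARY.AP", "ADD-VENDOR"))    # True
print(authorized("MARY.AP", "WRITE-CHECKS"))  # False
```

As the next paragraphs point out, a check like this is only as trustworthy as the user id it receives.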

   However,  a security system is only  as strong as its weakest link.
Say  that  your  accounting system calls the  WHO intrinsic, gets the
user  id,  and  uses it to determine which  features of the system the
person  is entitled to use; how do you know that the user id is really
correct?  If  you only allow MARY.AP to add  a vendor, how do you know
that it's really Mary who's signing on that way?

   This  kind  of  application  security  system  relies on  the logon
security  system -- it relies on "user id integrity" to guarantee that
a  user id really corresponds to  a particular person. Only good logon
security,  with private, frequently changed passwords can provide this
sort of guarantee.

   Note  also  that  if  you  use  a  user's logon id  for audit trail
purposes,  you MUST make sure that  the logon id really corresponds to
one  and  only  one  person.  What's  the use of  seeing that a "ghost
employee"  was  added by PETE.AP when  anybody might have known Pete's
password?  In  fact, if you ever have  to prosecute an embezzler using
audit  trails, those audit trails will  be worthless if you don't keep
your  logon  security airtight -- it would  be very easy to claim that
somebody else simply logged on under his user id.


   There's  been a lot of  publicity recently about dial-in "hackers",
often  high  school students who break  into people's computer systems
and wreak havoc. However, it's been estimated that 90% of all security
violations  are "inside jobs" (though, of  course, it's hard to get an
exact  figure).  In  our  own  experience, we've seen  no break-ins by
strangers; instead, we've encountered (among other things):

   *  Several  time  bombs  planted  in  a  program  by  a disgruntled
      programmer.

   *  In three separate cases, malicious  encryption of source code by
     employees  on  their  way  out,  resulting  in  perhaps  tens  of
     thousands of dollars of lost software investment.

   * Embezzling of over $1 million by the chief financial officer of a
     small  company. (This same person, when earlier approached by his
     DP  people  with a budget request  for security software, replied
     that "we don't need any more security".)

   It's  very  natural  to fear the stranger  more than the people you
work  with every day; unfortunately, it's  those inside people who are
most  likely  to  sabotage  or  steal  (or  even  -- with  the best of
intentions  --  do something very stupid  using capabilities that they
should  never  have  been  granted). Protecting the  system against an
outsider  is  easy  (just  make sure you have  plenty of passwords and
perhaps  even special passwords on  dial-up lines); protecting against
an insider is much harder.

   Once a person is already authorized to use the system, what can you
do to protect yourself against him?

   *  You  can make sure that he doesn't  do more than he was actually
     authorized  for. The best way of  assuring this is by DENYING HIM
     ACCESS  TO THE COMMAND INTERPRETER -- when he signs on, he should
     immediately (with a logon UDC) be dropped into the application he
     needs  to use or, if there's more than one such application, into
     a  logon menu. After he's done, the logon UDC should sign him off
     --  all this time, he'd only be able to do what he was explicitly
     authorized to do.

     In general, remember that once a person gets access to the CI, to
     the  editor, to QUERY, to the  compilers, etc., he's a lot harder
     to contain than when he only has application access. At a typical
     site,  there  are  many  possible  holes  in  security  (unneeded
     capabilities,  :RELEASEd files, etc.) that  can be exploited by a
     person with CI access.

   *  You  can  keep  track of everything he  does by using MPE and/or
     IMAGE  logging.  Better yet, let all  your users know that you're
     keeping  track of them. This way,  they'll know that even if they
     manage  to  do something nasty, there'll  be thorough and damning
     evidence of what they were up to.

     IMAGE  logging is particularly good for this -- it can keep track
     of  every  change  made  to the database  (through an application
     program,  through QUERY, or what  have you). Robelle Consulting's
     DBAUDIT  can  analyze  these  log files for you  and let you know
     exactly who did what, where, and when.
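The logon-UDC approach in the first point above might look something like the following sketch (the UDC and program names are illustrative; check your catalog syntax before relying on it). OPTION LOGON makes the UDC fire automatically at sign-on, and NOBREAK keeps the user from hitting BREAK to escape into the CI:

```
APMENU
OPTION LOGON, NOBREAK
RUN APMENU.PUB.AP
BYE
```

When the menu program terminates, the BYE signs the user off, so the session never touches the command interpreter at all.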


   The  INTEREX Contributed Library has  several programs that let you
encrypt files -- you supply a key, and the file gets encrypted; nobody
will  be  able  to decrypt it unless they  can supply exactly the same
key.

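The mechanics can be shown with a toy cipher (a repeating-key XOR, certainly not the algorithm any particular contributed program used): the same operation encrypts and decrypts, and without the exact key the data is gibberish.

```python
from itertools import cycle

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR against a repeating key.
    The same call both encrypts and decrypts."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

source = b"PROC CUT-CHECKS; ..."
locked = xor_crypt(source, b"only-he-knows")
print(xor_crypt(locked, b"only-he-knows") == source)  # right key: True
print(xor_crypt(locked, b"best-guess") == source)     # wrong key: False
```

That last line is the whole problem: if the only person who knows the key walks out the door, the files are as good as erased.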
   At  first  glance, this seems to be  a very useful security tool --
protect  all  your sensitive files against  EVERYBODY (even the system
manager).  In reality, this turns out to be a possible "Trojan horse"
that,  in  our experience  alone,  has  cost  our  customers  tens of
thousands of dollars in lost software.

   In  the  span of less than a  year, we encountered three completely
different  sites  with  the  same  story. One day  the company fires a
programmer  (or  perhaps  even  the DP Manager) and  the next day they
discover  that many of their source files have been encrypted! Without
the  fired  person (with whom they naturally  want nothing to do), the
sources are completely useless.

   In  all  three  cases the backups were no  help; the files may have
been  encrypted  long before, or perhaps  even decrypted every morning
and  re-encrypted  every  night  before  the backup! In  one case, the
company managed to recover its files (after paying me some non-trivial
consulting  fees  to  crack  the encryption algorithm!);  in the other
cases, the companies had to choose between rewriting the programs from
scratch  (or redeveloping them from a very old version that they still
had  un-encrypted) or taking expensive  (and possibly fruitless) legal
action against the saboteur.

   The  ironic  thing  is  that  if the saboteur  were accused (either
civilly or criminally), he could claim that he did nothing wrong -- in
fact,  he  could  say  that  he was actually  trying to IMPROVE system
security   by   encrypting   the   sources  to  protect  them  against
unauthorized  readers. My legal sources tell  me that unless you could
prove  malice (which would be  very difficult), a criminal prosecution
would be almost certainly impossible.

   The  first  moral  to  draw  from  this is NOT  TO ALLOW ENCRYPTION
PROGRAMS  ON  YOUR  SYSTEM.  This is especially  true because very few
sites  actually have data so sensitive  that it must be encrypted. MPE
file  security  can  do a pretty good  job against anybody who doesn't
have  SM capability; and usually, the  system manager should really be
allowed  to read everything in the  system (if only to protect against
people leaving the company and leaving their files encrypted).

   The  second,  and  more interesting, moral is  THINGS ARE OFTEN NOT
WHAT  THEY  SEEM  (actually,  that's the moral  of very many stories).
Something  that at first glance appears to be a powerful security tool
can  actually  prove  to  be  a great danger to  your system. Let's be
careful out there.


   It's a sad irony of the world that, just as the people who you most
count  on to be smart end up  disappointing you, so the people who you
count  on to be dumb end up being much smarter than you'd like them to
be.  The guy who put several time bombs into his programs before being
fired  --  and  brought  a coast-to-coast application  system down for
several  days -- was actually fired for incompetence. He was certainly
an incompetent programmer, but you don't have to be really smart to be
really nasty.

   It  is  true  that  no  security system is  perfect; somebody smart
enough and experienced enough can probably break into any computer. We
don't expect our computers to be KGB-proof.

   However,  it doesn't take that much smarts for a user to come up to
an  unattended terminal and try a :LISTACCT just to see if it works; for
an  operator  to  look  at  the  printout of a job  stream and see the
MANAGER.SYS  passwords  embedded inside it; or  even for an accountant
(who  is, after all, trained in juggling  numbers) to see a gaping hole
in the system security and find temptation too hard to resist.

   Though  there's  a  limit  to how much you  ought to invest in your
system  security  --  going  from  99%  security  to 99.5%  may not be
cost-effective  --  there  are a lot of  inexpensive things you can do
to protect your system from very common security threats.


   The  most pernicious myth about system security is also the hardest
to deal with, because it's psychological rather than technical. Nobody
wants  to  think  of themselves as a  possible victim; nobody wants to
think  of  their  co-workers  as  possible thieves  or saboteurs. Many
people  don't realize the importance of  security until it's too late,
until  tens  or  hundreds  of thousands of dollars  have been spent on
recovery  when a couple of thousand  dollars worth of prevention could
have  avoided  the  whole  mess. All because of  the myth of "it can't
happen here".

   We've  seen  too  many  sites fall under the  spell of this myth --
don't let it happen to you!
