INTERNATIONAL CONSERVATION AND THE WORLD WIDE
WEB
By Donna FitzRoy Hardy, Ph.D.
International Zoo News Vol. 43/8
(No.273):562-570.
[This paper appears with the permission of
Nicholas Gould, Editor of International Zoo
News.]
Introduction
The international zoo community is gradually coming to regard the
personal computer (PC) as a communication tool, although we might be well
into the next century before computers will compete with the telephone,
facsimile machine or the post. But electronic communications technology is
evolving very fast, and since my article on computer-mediated communication
appeared in International Zoo Yearbook (Hardy, 1994), the ease with which
the user can access computer
networks has undergone a revolutionary change. While other applications
of the Internet (especially e-mail and Internet Relay Chat) continue to
gain popularity, the recent surge of interest in the Internet is
attributed to the use of the World Wide Web (also referred to as WWW, W3, or
simply, 'the Web'). When the above article was written in 1993, the
World Wide Web was just another information server that the user
could access with a Telnet code, and most of the sites accessible through
the Web at that time were institutions and centers for high-energy
physics, mathematics or computer science. Today's popularity of
the Web is due to features that distinguish it from all other Internet
applications that preceded it: its hypertext [1] orientation and its
ability to support hypermedia (graphics, color, sound, motion, etc.).
Hypertext has captured the imagination and enthusiasm of the Internet
community and nearly all Web documents include hypertext links to other
documents. These 'hyperlinks' automatically locate information in
computers (called HTTP or Web 'servers') anywhere in the world (Wiggins,
1995).
History of the World Wide Web
While the World Wide Web and the Internet are often thought to be
synonymous, the Internet is actually only the physical medium used to
transport electronic data. The Web is a collection of computer
protocols [2] and standards that computers use to access that data.
The three computer standards that define the Web are: Uniform Resource
Locators (URLs), Hypertext Transfer Protocol (HTTP), and Hypertext
Markup Language (HTML) (Richard, 1995). URLs are the standard
means used on the Web to locate Internet documents, and HTTP is the
primary computer protocol used to retrieve information via the Web.
(Fuller definitions of both terms, and of HTML, are given below.)
The interactivity of this technology has prompted some to
dub the World Wide Web the 'Fourth Media', following print, radio and
television as a means of mass-market communication. The First Media
originated in about 1450 with the invention of the movable type
printing press by Johannes Gutenberg, and print remained the only means
of mass communication for more than 450 years. The first radio
communication signals were sent in 1895 by Guglielmo Marconi,
although radio technology had to wait until the development of the audion
tube by De Forest in 1906 before it could move from dot-dash radio
telegraphy to full sound modulation. De Forest's invention set the
stage for the Second Media of mass communications, and the first
experimental broadcast of the voices of Enrico Caruso and Emmy Destinn
took place in 1910. Formal radio broadcasting began on 2 November
1920, with the inauguration of a daily schedule of programs by
KDKA-Pittsburgh. The Third Media began with RCA's experimental television
broadcast of a Felix the Cat cartoon in 1936, and television use
proliferated during the late 1940s. The World Wide Web, proposed in 1989,
was made available to Internet users in 1991. But as the Fourth Media, the
Web differs
considerably from the other three media of mass communication. Whereas
print media is dependent upon an editor and the space afforded within a
publication, and television and radio are controlled by programmers and
the limits of time (contents being delivered in segments), the World Wide
Web is not bound by space or time.
Although the original idea that computer systems could allow their users
to follow non-linear [3] paths through various documents
dates back
to the 1940s, a practical hypertext system did not appear until 1987,
when the HyperCard package for the Macintosh computer was released
(Wiggins, 1995). About the same time, a
hypertext project was proposed at CERN, the European Laboratory for
Particle Physics in Geneva, Switzerland. Their design document
describing how computer documents could be 'interwoven' or blended
together contrasted with the hierarchical [4] model used by the
Internet Gopher, announced by the University of Minnesota in 1991.
Soon after its origin in 1989, the World Wide Web gained great popularity
among users of the Internet.
Today's widespread use of the World Wide Web is a direct result of the
development of functional computer programs called 'browsers.' One early
browser, DOSLynx, was released in early 1992. Although it is a text-only
program, it affords access to the Web for people whose PCs cannot run
Windows. DOSLynx is available via FTP from the University of Kansas:
connect to ftp2.cc.ukans.edu, log in as 'anonymous', enter your e-mail
address as the password, and consult the README.TXT file for directions
(a sketch of such a session appears at the end of this section). (FTP, or
File Transfer Protocol, is the Internet
protocol for transferring files between computers.) By the fall of 1993,
the first graphical browser, Mosaic, became available. Today, the most
widely used graphical Web browser is Netscape Navigator, for which
excellent guides are available (e.g., Minatel, 1995). Graphical browser
programs afford easy access to multimedia graphics, audio and video
files from anywhere on the Internet, and new applications utilizing
the interactivity of the Internet are rapidly being developed.
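
For readers with access to a scripting language, the anonymous FTP session
described above can be sketched in a few lines. The example below uses
Python's standard ftplib module; the e-mail address is a placeholder, and
it assumes README.TXT sits in the login directory.

    import ftplib

    # Sketch of an anonymous FTP session to the University of Kansas server.
    ftp = ftplib.FTP("ftp2.cc.ukans.edu")
    ftp.login(user="anonymous", passwd="user@example.com")  # placeholder address
    ftp.retrlines("LIST")  # list the contents of the login directory
    with open("README.TXT", "wb") as f:
        ftp.retrbinary("RETR README.TXT", f.write)  # save the README locally
    ftp.quit()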
Uniform Resource Locators
URLs are the standard means used on the Web to locate Internet documents.
They provide a simple addressing scheme that unifies a wide variety of
dissimilar protocols and can specify FTP file retrieval, locate Usenet
Newsgroups and gopher menus, define e-mail addresses, and identify HTTP
documents. The typical format of a URL is
protocol://server-name:port/path. For retrieving documents on the World
Wide Web, the protocol in this format is http. Entering a URL beginning
with http:// tells a browser program to look for a computer file on the
World Wide Web. The rest of the URL tells the browser exactly where the
file is located: the domain (name of the Web server), where in that
computer the file is stored, and the name of the file to be
retrieved.
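
To make the anatomy of a URL concrete, the following sketch (an
illustration only, using Python's standard urlparse function) breaks the
C.A.U.Z. address discussed below into the parts just described:

    from urllib.parse import urlparse

    parts = urlparse("http://www.selu.com/~bio/cauz/")
    print(parts.scheme)    # 'http' - the protocol
    print(parts.hostname)  # 'www.selu.com' - the domain (name of the Web server)
    print(parts.port)      # None - port 80 is assumed when none is given
    print(parts.path)      # '/~bio/cauz/' - where in that computer the file is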
One gains access to the World Wide Web by using a PC with a modem and a
computer communications program to connect to the Internet, linking one's
PC (called the 'client') to a server at an online service
(e.g., America OnLine, CompuServe, Prodigy) or an Internet service
provider (e.g., Netcom, PSINet, UUNet). The user needs to establish a
connection that can support TCP/IP (Transmission Control
Protocol/Internet Protocol), the protocol with which computers exchange
information on the Internet. This is usually done with either a SLIP
(Serial Line Internet Protocol) or PPP (Point-to-Point Protocol)
connection to the server. The user will also have to install networking
software in the PC, such as Trumpet Winsock or a commercial product such
as Internet in a Box (Weiss, 1996). It is highly recommended that the
user become familiar with this technology by first reading an introductory
book about the Internet, such as the one by Krol (1994) or by Hahn and
Stout (1994). If one's PC is operating in a Windows environment or is a
Macintosh, a graphical Web browser program like Netscape Navigator
can be used. Once connected to the online service or Internet
service provider, the user can gain access to files located in that
particular server or in another server anywhere on the Internet, although
the actual speed of this access depends upon the speed of the modem and
the type of connection to the Web server. A person in Great Britain entering
the URL http://www.selu.com/~bio/cauz/ will be retrieving files
from a Web server in the United States - in this case, a computer in
Seattle, Washington. This is the URL for the Web Site of the Consortium
of Aquariums, Universities and Zoos (C.A.U.Z.).
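
Once such a connection exists, retrieving a Web document amounts to a
single request. Here is a minimal sketch in Python, assuming the C.A.U.Z.
server is reachable from the user's machine:

    import urllib.request

    # Retrieve the C.A.U.Z. home page and show the start of its text.
    url = "http://www.selu.com/~bio/cauz/"
    with urllib.request.urlopen(url) as response:
        document = response.read().decode("latin-1", errors="replace")
    print(document[:200])  # the first 200 characters of the retrieved file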
Hypertext Transfer Protocol and Hypertext Markup Language
HTTP is the primary computer protocol used to distribute information
across the Internet. It specifies a simple electronic transaction to
deliver requested information from a server to a client, and its
simplicity permits fast response times. Basically, the transaction
involves: (1) the client establishing a connection to the server, (2) the
client issuing a request to the server specifying a particular document to
be retrieved, (3) the server sending a response containing the text of
the document to be retrieved, if it is available, and (4) either the
client or the server disconnecting after each request. Because a new
connection to the HTTP server must be made for every request, pages with
many graphics can be slow to load: each graphic requires a separate
connection. Newer Web browser programs like Netscape
Navigator open multiple connections and receive documents in
parallel, so that the user can be reading the text while the graphics are
still being received by the client.
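
The four steps of the transaction can be made concrete with a short
sketch that speaks HTTP over a raw network connection. This is an
illustration in Python, not a production client; the host and path are
the C.A.U.Z. address used earlier.

    import socket

    host, path = "www.selu.com", "/~bio/cauz/"
    request = f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n"
    s = socket.create_connection((host, 80))  # (1) client connects to server
    s.sendall(request.encode("ascii"))        # (2) client requests a document
    response = b""
    while True:                               # (3) server sends the document
        chunk = s.recv(4096)
        if not chunk:
            break
        response += chunk
    s.close()                                 # (4) the connection is closed
    print(response[:200])

With HTTP/1.0 the server closes the connection after sending the document,
which is exactly why every graphic on a page costs a fresh connection.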
Hypertext Markup Language (HTML) is an important innovation associated
with the World Wide Web. It is the 'hidden code' - invisible to the
person viewing the Web document - that determines how the document will
appear to the person who has retrieved it. HTML also provides hypertext
links within a document or between documents so that a viewer can easily
move from one document to another by clicking a button on a computer mouse
when the screen's pointer is on a highlighted part of the text. Many
items at the C.A.U.Z. Web Site have been marked in HTML so that
they are linked to other documents. While some of these links lead to
files on the same Web server in Seattle, many others lead to documents
on servers in other cities.
For example, the user may read the description of the important Internet
resource called ZooNet, and then click on the word ZooNet to 'visit'
that Web Site without having to enter its URL. After deciding that
ZooNet is a site to visit repeatedly, the user can use the browser program
to 'bookmark' this link for direct access later.
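
What does this 'hidden code' actually look like? The sketch below
(written in Python only for consistency with the other examples in this
article; the file name is arbitrary) writes out a minimal HTML document
containing one hypertext link, using the real ZooNet address given later
in this article:

    # Write a minimal HTML document containing one hypertext link.
    page = """<HTML>
    <HEAD><TITLE>A Minimal Home Page</TITLE></HEAD>
    <BODY>
    <P>Read about <A HREF="http://www.mindspring.com/~zoonet/">ZooNet</A>,
    an important Internet resource.</P>
    </BODY>
    </HTML>
    """
    with open("home.html", "w") as f:  # any Web browser can then display it
        f.write(page)

A browser rendering home.html would show the word ZooNet highlighted;
clicking it retrieves the ZooNet pages without the user ever typing the URL.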
Hypertext links form the threads of information that are the structure of
the World Wide Web, and Web Sites like the one provided by C.A.U.Z. that
offer a broad array of useful hypertext links can serve as
convenient 'launching pads' into the Web. (Many of us who are
interested in conservation have changed the 'HOME' location of our browser
program from its default location - e.g., Netscape Corporation - to the
C.A.U.Z. Web Site because it provides easy access into the World
Wide Web and its vast resources.) As increasing numbers of people begin
to use graphical Web browsers to find documents on the Internet, many
of them become interested in creating their own Web documents.
Excellent texts, such as The HTML Sourcebook (Graham, 1995), are useful in
creating 'Home Pages', and there are many HTML authoring tools available
through the Internet (Sigler, 1996).
Since developing Home Pages is relatively easy, it is possible to create
Web documents without fully understanding how hypertext markup language
actually works. While the hidden HTML code tells a browser program how to
display a document on the Web, exactly how that document actually appears
on a computer monitor (e.g., appearance of the text, where the graphics
appear on the screen, the color of the background, etc.) depends upon how
the browser program interprets that HTML code. Consequently, the
same Web document may have a very different look when viewed with
Netscape Navigator, Mosaic, or a Web browser provided by a commercial
online service provider. In addition, various preference settings in
one's browser program allow the user to 'customize' the appearance
of an HTML-coded document (e.g., type and size of the fonts) to meet one's
own tastes. Designers of Web documents should remember that they cannot
always be sure how the document that they have created actually appears to
other people.
Searching the Web for Information
Since information on the Internet is not well-organized, the task of
locating specific documents among the millions of files that reside on
thousands of Web servers can be quite daunting. Catalogs like
The Internet Yellow Pages (Hahn and Stout, 1995) give some insight
as to the vastness of this information resource. Computer programs called
'search engines' are a recent and powerful way of locating
information on the Internet. One of the first Web search tools, Yahoo!
[http://www.yahoo.com], was developed in 1994 at Stanford
University. This program utilizes human indexers to survey and categorize
resources on the Web. Initially, when there were only a few thousand Web
sites, this system worked well: the indexers sorted documents by their
title pages. But by the fall of 1995, there were about 200,000 Web sites,
and the number appears to be growing exponentially.
Since there is no limit to the total number of sites that are accessible
on the Web, it has become necessary to rely more heavily on electronic
searching. Automated Web search engines are sometimes referred to as
'robots', 'worms', or 'spiders', and they include such fancifully named
creations as WebCrawler [http://www.webcrawler.com] and Lycos
[http://www.lycos.com] (Wiggins, 1995). Since each search engine
is a computer program that uses a unique 'strategy' to search the
Internet, the results of various search engines may or may not contain the
same documents. Thus more than one search engine must be used to ensure
that a search is comprehensive. Two recent experiments harness
existing search engines in parallel to give users the combined results of
their Internet searches: MetaCrawler [http://www.metacrawler.com]
and UseIt! [http://www.he.net/~kamus/useen.htm]. Yahoo!, Lycos,
WebCrawler, MetaCrawler, AltaVista, Excite, and other useful search
engines can be found at the C.A.U.Z. Web Site.
There is no one ultimate search tool for the World Wide Web. Various
search engines use different search techniques and offer different ways of
'viewing' the Web. A search engine sends out an array of queries on the
Web, starting with a few servers whose files it searches either
completely, word by word, or in summary fashion for titles, key words and
abstracts, then makes a note of links in these files to other
sites. Those links lead to further searches, which expand outward
in a widening circle, much like a chain letter. And - fortunately for the
person who has entered a key word or phrase into a search engine - all
this electronic activity takes place automatically!
Two 'strategies' used by search engines lead to 'depth-first' or
'breadth-first' searches. And because each search engine involves a
different computer program, each may lead to different results: a
search for 'wildlife conservation' using one search engine might locate
Internet documents that differ from those found by a second search engine.
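
To make the two 'strategies' concrete, here is a toy sketch, in Python, of
a breadth-first crawl. The fetch and extract_links functions are
hypothetical stand-ins for document retrieval and HTML link extraction;
no real search engine is this simple.

    from collections import deque

    def crawl(start_url, fetch, extract_links, limit=100):
        seen = {start_url}
        queue = deque([start_url])    # pages waiting to be visited
        while queue and len(seen) < limit:
            url = queue.popleft()     # breadth-first: take the oldest page
            for link in extract_links(fetch(url)):
                if link not in seen:  # note each new link for a later visit
                    seen.add(link)
                    queue.append(link)
        return seen

Replacing queue.popleft() with queue.pop() - taking the newest page first -
turns the same sketch into a depth-first search, which helps explain why
two engines can return quite different results from the same starting points.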
Search engines are clearly very valuable tools, although conducting
what one might consider to be a comprehensive search of the Internet would
involve using a variety of different search engines. This process can be
quite time-consuming, especially since not all of the documents found by a
search engine are equally relevant. Consequently, many people prefer to
begin with the hypertext links that are provided by established Web Sites.
The C.A.U.Z. Web Site, for example, provides links to more than 700
documents on a variety of topics: these links are a convenient and
efficient way to locate important information resources in the World Wide
Web. When the user finds the name and description of a document of
interest (e.g., one dealing with wildlife conservation), a click of the
mouse automatically retrieves that document, which usually provides
hypertext links to other relevant documents on the same or related topics.
The result of such 'non-linear searching' using these hypertext links to
relevant information on the Internet can be very rewarding and
sometimes surprising.
The C.A.U.Z. Web Site
The Consortium of Aquariums, Universities and Zoos was founded in 1985 for
the purpose of facilitating communication and collaboration between
university scientists and educators and their counterparts at zoos and
aquariums around the world (Hardy, 1992). Information submitted to
C.A.U.Z. has been available through annual printed directories since 1987
and has been widely used by many people. For example, the database was
analyzed in 1993 in an effort to understand the kind of research
activities that take place in zoos and aquariums (Hardy, 1996). The
C.A.U.Z. Web Site was established in August 1995, and with the development
of the C.A.U.Z. search tool by Tim Knight (C.A.U.Z. Webmaster), a user can
conduct online searches of the information submitted by hundreds of
scientists and educators who are dedicated to wildlife conservation. Its
simple menu and numerous hypertext links allow the user to learn of the
interests and current projects (as well as titles, institutions,
addresses, and phone and fax numbers) of C.A.U.Z. Network members. There
are also listings of people who have a professional focus in a wide range
of fields - from Animal Behavior and Conservation Biology to Restoration
Ecology and Wildlife Management - as well as listings of people with
interests in specific groups of animals or who are conducting field
studies in many countries. The C.A.U.Z. database can be searched with key
words: names of animals, institutions, cities or countries, or the
specialties (e.g., 'marine mammal ecology', 'population dynamics',
'reproductive biology') of C.A.U.Z. Network members. The user can also
search for a name that is listed alphabetically under a particular
professional focus or an animal group. And if a C.A.U.Z. Network member
has an e-mail address, the user can send an e-mail message to that member
directly from the C.A.U.Z. Web Site.
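
The behavior of such a keyword search can be suggested with a toy sketch.
The record layout below is invented for illustration and does not reflect
the actual structure of the C.A.U.Z. database.

    # Toy keyword search over invented member records.
    members = [
        {"name": "A. Example", "specialty": "reproductive biology",
         "animals": "marine mammals", "country": "U.S.A."},
        {"name": "B. Example", "specialty": "population dynamics",
         "animals": "primates", "country": "Brazil"},
    ]

    def search(records, keyword):
        keyword = keyword.lower()
        return [r for r in records
                if any(keyword in value.lower() for value in r.values())]

    for match in search(members, "population dynamics"):
        print(match["name"], "-", match["country"])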
Applications of the Web for International Conservation
In addition to its hypertext links, it is the Web's hypermedia capability
that makes it very special: it can retrieve color images or graphics,
sound and motion pictures as well as text. The recent proliferation of
Home Pages by zoos and aquariums can probably be attributed to the Web's
usefulness for marketing and public relations. Indeed, an examination of
the very first efforts by zoos and aquariums to establish a 'Web presence'
will reveal that most of their information is of the type found in
advertising brochures. (In fact, some of the information the user finds
on the Web about zoos and aquariums did not even originate with the
institutions involved: the Home Page for one major zoo was established,
apparently without the zoo's knowledge, by a local travel agent hoping to
attract customers!) Some zoological institutions are making pioneering
efforts to provide educational programs via the Web, and some now
provide excellent educational information. Some of these Home
Pages offer glimpses of future applications of hypermedia by adding short
motion pictures and sound to a wide array of colorful animal photographs.
The AZA Home Page [http://www.aza.org/] and ZooNet
[http://www.mindspring.com/~zoonet/] provide numerous links into
many of these Home Pages.
While many current applications of the World Wide Web are indeed dazzling
and highly entertaining, this technology cannot serve the needs of the
international conservation community until computer-mediated communication
is widely accepted. The potential of computers for rapid and
efficient distribution of information is beginning to be realized through
the increasing use of e-mail. But in reality, using e-mail is just another
way of sending a letter to another person. New users of e-mail who have
been relying on the post and fax are usually impressed by its immediacy:
it is truly wonderful to be able to send a message internationally and get
a direct reply within hours (or minutes) rather than days. But while
e-mail is clearly more efficient and less expensive than facsimile or
post, it is probably the least imaginative use of modern electronic
communications technology.
The World Wide Web allows a user to 'interact' with the information
provided by the server. For example, the data available through the
C.A.U.Z. Web Site comes from its large international database.
This information - as well as information provided by the IUCN
(including the 1994 Red List of Threatened Animals) and by the
International Species Inventory System (including ISIS Abstracts) - is
available (and searchable) through the World Wide Web. For example, using
the IUCN Site, the user can find the name of a species that is endangered
in a particular country, then quickly check the ISIS Abstracts
to learn in which zoos that species is exhibited. The user can
then access the C.A.U.Z. Network to learn the names of people who share a
professional interest in that species and are perhaps engaged in captive
propagation of that species or in fieldwork in that
country. The search tool provided at the C.A.U.Z. Web Site allows the
user to enter the name of the animal (or country) to find people to
contact. A great deal of information is provided for each person in the
C.A.U.Z. Network, and this information is available anywhere on the
Internet at any time.
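
The chain of searches just described can be summarized in a speculative
sketch. All three lookup functions below are hypothetical stand-ins for
queries the user would actually perform by hand at the IUCN, ISIS and
C.A.U.Z. Web Sites; none of those sites offers such a programming interface.

    # Speculative sketch of the cross-database search described above.
    def contacts_for_endangered_species(country, red_list, isis, cauz):
        results = {}
        for species in red_list(country):   # IUCN: endangered in that country
            results[species] = {
                "zoos": isis(species),      # ISIS: where it is exhibited
                "people": cauz(species),    # C.A.U.Z.: whom to contact
            }
        return results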
Summary
The increasing use of the Internet has important implications for the
international zoo community. First and foremost, computer-mediated
communication is facilitating the sharing of information to a degree
that has never been possible before. The potential of computers for
rapid and efficient distribution of the kind of information needed by
scientists engaged in conservation projects is starting to be realized by the
increasing use of e-mail, and in the near future, other applications of
this technology will be found to be absolutely essential. Although the
interactive capability of the Internet has not yet been fully explored,
some of the more innovative means of communicating are now
beginning to evolve. For example, the user can now access live images
from Internet 'cams' that have been placed in locations
all over the world, and since some of these video cameras can be
controlled remotely, the viewer has some control over what images are
being transmitted over the Internet. And the recent commercial
availability of relatively inexpensive hardware and software now makes it
possible for people to utilize visual and voice (or voice alone)
communication via the Internet in 'real time' - rather like a two-way
Internet 'video-phone' (or Internet phone). And other innovative Internet
applications are being rapidly developed. Only time will tell how
these extraordinary advances in communications technology will be used by
the worldwide conservation community.
References
Graham, I.S. (1995): The HTML Sourcebook. John Wiley & Sons, New
York.
Hahn, H., and Stout, R. (1994): The Internet Complete Reference.
Osborne McGraw-Hill, Berkeley, California.
Hahn, H., and Stout, R. (1995): The Internet Yellow Pages (2nd
ed.). Osborne McGraw-Hill, Berkeley, California.
Hardy, D.F. (1992): The Consortium of Aquariums, Universities and
Zoos. International Zoo News 39:8, pp. 17-20.
Hardy, D.F. (1994): The international zoo community and
computer-mediated communication. International Zoo Yearbook 33, pp.
283-293.
Hardy, D.F. (1996): Current research activities in zoos. Wild
Mammals In Captivity: Principles and Techniques (eds. Devra G. Kleiman
et al.), University of Chicago Press, Chicago.
Krol, E. (1994): The Whole Internet: User's Guide and Catalog (2nd
ed.). O'Reilly and Associates, Sebastopol, California.
Minatel, J. (1995): Easy World Wide Web with Netscape. Que
Corporation, Indianapolis, Indiana.
Richard, E. (1995): Anatomy of the World-Wide Web. Internet World
6:4, pp. 28-30.
Sigler, Douglas. (1996): HTML toolbox. Internet World 7:4, pp.
51-52.
Weiss, Aaron. (1996): Personal connections. Internet World 7:3, pp.
86-88.
Wiggins, R.W. (1995): Webolution: the evolution of the
revolutionary World-Wide Web. Internet World 6:4, pp.
32-38.
[Note: the above articles in Internet World, as well as full-text
versions of back issues of this magazine, are available on the World Wide
Web: http://pubs.iworld.com/iw-online/]
[1] Hypertext: a means by which one can retrieve other electronic
documents by clicking (with a computer mouse) on highlighted words or
graphics in a hypertext document.
[2] Protocol: an agreed-upon convention for intercomputer
communication.
[3] A non-linear path differs from a linear (direct or straight)
route in that it does not follow a single direction. In most documents,
one reads the various sections in an uninterrupted sequence. With a
hypertext document, one can click on a highlighted part of the text
(or on graphics) and go to another part of the same document or access
entirely new documents - perhaps on other subjects - thus deviating
from a sequential presentation of information. (One sometimes
adopts a non-linear searching strategy when one looks up the definition of
a word in a dictionary. Occasionally one will come across a word in this
definition that is unfamiliar, and so be forced to look up that
word too. The definition of this second word may also contain a word that
must be looked up, and so forth. The path to the definition of the
original word thus becomes more complex, because the route to understanding
of the word is circuitous.)
[4] In a document that is arranged hierarchically, subjects are
categorized in order of rank, each subordinate to the one above it. (The
classification of animals in Phyla, Classes, Orders and Families is an
example of a hierarchical arrangement.)
Donna FitzRoy Hardy, Ph.D., Network Coordinator, Consortium of
Aquariums, Universities and Zoos,
Department of Psychology, California State University, Northridge,
California 91330, U.S.A.