MIS 4477
Network and Security Infrastructure
JAKE MESSINGER (jake@uh.edu)

Fitzgerald/Dennis Chapter 2: Application Layer (class 3)


Author's PowerPoint Presentation
Student Companion Site
Networking Labs


INTRODUCTION

In the early days of information systems, most systems were designed to improve the productivity of entire departments or organizations.
In the 1980s, the introduction of microcomputers enabled the widespread development and use of information systems to support individuals.
By the 1990s, the widespread availability of data communications and networking had begun to change our vision of the computer.

Three sets of applications are the future of information technology:


Application Architecture

Host-based architecture: the main computer does ALL the work
Client-based: clients do MOST of the work
Client-server: work is shared between client and server depending on the task

Work done by an application PROGRAM:
1. Data storage - reading and writing data to the hard drive
2. Data access logic - the process used to ACCESS the data (in a specific manner)
3. Application logic - application dependent, and the most complex part
4. Presentation logic - interaction with the user
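
A minimal sketch of these four functions in Python (illustrative only; the contact data and function names are invented for this example):

def data_storage():
    # Data storage: in real life this reads records from disk;
    # a hardcoded list stands in for the file here.
    return ["713-555-0101 Acme", "281-555-0102 Bayou", "713-555-0103 Cactus"]

def data_access_logic(records, area_code):
    # Data access logic: select ONLY the records we need.
    return [r for r in records if r.startswith(area_code)]

def application_logic(matches):
    # Application logic: the application-dependent work (here, a summary).
    return f"{len(matches)} account(s) in area code 713"

def presentation_logic(text):
    # Presentation logic: interaction with the user.
    print(text)

presentation_logic(application_logic(data_access_logic(data_storage(), "713")))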

Host-Based:
The first type of network. Usually mainframes with dumb terminals connected
to them. Examples: the IBM 370, our very own IBM ES-9000. The client ONLY sent
keystrokes and received screens. Makes for a secure and fully controlled
network.
Problems - the host must do everything, so it can get slow and must be
all-powerful; the cost to upgrade servers is much higher than to upgrade
clients. Intelligent terminals and PC emulation were invented to HELP with
some of the bottleneck, along with pseudo-conversational programming
methods, e.g. CICS.

Client Based:
Microcomputers in a LAN.
Today virtually ALL PCs in the corporate world are networked. Mainframes now
comprise only 10% of the world's processing power.
 

Why did this happen?

Low-cost, popular, EASY TO USE applications: WordPerfect, Windows,
Lotus.
PC software was easy to write and maintain.
The cost of microchips has dropped exponentially while their speed/power
has risen at the same exponential rate.
Application software was responsible for the application logic, presentation
logic and data access logic. The server ONLY handled the data storage logic,
e.g. Novell NetWare.
Worked pretty well - the server didn't have to do all the work, and it was cheaper.
Single-threaded software.

Why is everything networked? Why are clients becoming “thinner”? Apps live on central servers OR in the “cloud” and are “rented”, e.g. Office 365.

Problems - circuits maxed out. ARCNET ran at 2.5 Mbps, Ethernet at 10 Mbps,
and both were susceptible to "collisions" (too much traffic).
Example: A user wants to search a 1-million-record database and print JUST
the accounts with area code 713. ALL the records must be sent to the client,
which examines them to find perhaps 60 or 70. VERY slow. Some applications
that took a host only a few minutes now took hours, regardless of the speed
of the computers. The CIRCUIT was the bottleneck.
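
A sketch of what that client-based query looks like (pure illustration; the helper function below is a made-up stand-in for a network file read):

def read_entire_file_from_server():
    # Hypothetical stand-in for a client-based file read: ALL 1,000,000
    # records cross the circuit, because the server only stores bytes.
    return ["713-555-0101" if i % 15000 == 0 else "281-555-0102"
            for i in range(1_000_000)]

records = read_entire_file_from_server()                # 1,000,000 rows on the wire
matches = [r for r in records if r.startswith("713")]   # client does the filtering
print(len(matches), "records were actually wanted")     # 67 out of a million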

How do we fix that???

Client-Server Architecture:

It got to the point where servers were very powerful and VERY good at
accessing data quickly through the use of caching, read-ahead methods, etc.,
BUT for the most part, the server's processor was ASLEEP! In our case at
AMS, our server's processor was about 5% busy. The things that slowed our
network down were the speed of the drive and the amount of RAM on the server,
but mostly it was the slow ARCNET circuitry we were using. Data files get
bigger, apps get bigger, and circuit hardware takes longer to upgrade to
faster circuits due to technology, cost, and complexity.

How Client-Server architecture solved the problem.

Balance the processing by moving the DATA ACCESS logic to the server. So
in our previous problem, the client sends the request to the server, which is
running the server version of the same software, to process the request for
just the 713 area-coded records. ONLY the matching records are sent through
the circuit, leaving the circuit relatively unclogged. ONLY the request and the
actual data needed by the client are sent.
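
The same request in client-server terms, using Python's built-in sqlite3 module as a stand-in for the server's database engine (the table name and data are invented):

import sqlite3

# sqlite3 plays the server's database engine here. In a real client-server
# system the SQL request travels to the server, the server runs the DATA
# ACCESS logic, and only the matching rows come back over the circuit.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, phone TEXT)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("Acme", "713-555-0101"),
                  ("Bayou", "281-555-0102"),
                  ("Cactus", "713-555-0103")])

rows = conn.execute(
    "SELECT name, phone FROM accounts WHERE phone LIKE '713%'").fetchall()
print(rows)   # only the 713 records, not the whole table, cross the circuit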

Cost benefits:
Scalability - easier to increase storage, processing speed, circuits. If the
server gets overloaded, put in a bigger processor or add another server. If a
client gets slow, upgrade it.

Cross-vendor compatibility - you can use a DEC Alpha for your server and run
the Alpha version of the software, and a PC to run the client version. Not
tied to a specific vendor OR processor type, because the software speaks the
same PROTOCOLS; e.g., Apples running an SQL client application can
access data on a Compaq running an SQL server application, just as long as
the applications are COMPLIANT with each other and follow the standards
set forth by the various committees, like ITU, IEEE, ANSI, etc.

Better use of hardware - You can now use that 95% wasted processing
power in your server to do things like database queries, sorts, etc.

Hardware matched to software/application - You can use the hardware that is
best suited for the task. E.g., use systems that access data quickly and have
lots of RAM in a server; use multimedia-type processors in the clients to
draw the screen quickly.

Network reliability - No one computer is the MAIN computer. You can have
redundancy. If one server goes down, you lose only part of your system,
not the whole thing as in the case of host-based processing.

PROBLEMS:

COMPLEXITY: Software manufacturers have to write a server side of their
software AND a client side. More difficult to design, program, test and
maintain.
Programmers must CHANGE their way of thinking to be able to program
client-server software.
Updates are more complicated and don't work that well. For example, let's
say a new version of server software comes out to make the data access
faster, but it requires new client software. You have to make sure that EVERY
CLIENT gets the new program updates as well, for the CLIENT version. It
means the system administrator has to run around to each PC and update
the software, OR send email or other communication to let the users know to
upgrade.

How to fix that???

Middleware - Software that sits between the client and server and acts as a
gateway - standardizes communication. Examples are DCE and CORBA (see
book page 75). More common on PCs is ODBC, Open Database
Connectivity, a standard for data access.
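
A sketch of ODBC in use from Python via the third-party pyodbc module (the DSN "SalesDB", the login, and the table are hypothetical; the point is that ODBC hides which vendor's database sits behind the DSN):

import pyodbc  # third-party ODBC bridge (pip install pyodbc)

# "SalesDB" is a hypothetical ODBC data source configured on this machine.
# The same client code works whether the DSN points at SQL Server, Oracle,
# or anything else with a compliant ODBC driver - that is the middleware win.
conn = pyodbc.connect("DSN=SalesDB;UID=student;PWD=secret")
cursor = conn.cursor()
cursor.execute("SELECT name, phone FROM accounts WHERE phone LIKE '713%'")
for row in cursor.fetchall():
    print(row.name, row.phone)
conn.close()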

Living software - Have the server automatically upgrade the client's software
the next time the client "logs in" to the server. This sort of software is harder
to write and requires a lot of advance planning, BUT it is the easiest and MOST
TRANSPARENT to the user. Examples of this sort of software are FedEx's
ON-LINE shipping software and the AlphaWorld virtual world software;
Netscape is getting there...

Summary: client-server has become more the trend. It's cheaper and,
hardware-wise, simpler; however, it introduces software complexity and places
a larger burden on the software developers as well as the end users to learn,
use and upgrade it.
 

TIERED architecture

2-Tiered Architecture: the most common. Server and client talk directly to
each other.

3-Tiered: 3 processors. One is responsible for the presentation logic (the
client), one in the middle is responsible for the application logic (a server),
and a server on the other end is responsible for data access. A good example
of this is FedEx's on-line tracking system. Your client PC uses a web browser
to access a server running web server software. You make a query to that
server. It then formulates the query in the required format for the main
FedEx server, which sends the shipping data BACK to the web server. It then
turns the data into a web page and sends it BACK to the client. You can call
the web server a front-end processor, because it's at the front of this chain.
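
A toy middle tier in Python's standard library that mimics the flow just described: the browser is the presentation tier, this script is the application-logic tier, and a dictionary stands in for the back-end data tier (all names and tracking numbers are invented):

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Stand-in for the third tier: in the real system this would be a query
# sent on to the main data server, not an in-memory dictionary.
SHIPMENTS = {"1234": "Delivered", "5678": "In transit"}

class MiddleTier(BaseHTTPRequestHandler):
    def do_GET(self):
        # Application logic: parse the browser's query...
        number = parse_qs(urlparse(self.path).query).get("tracking", [""])[0]
        status = SHIPMENTS.get(number, "Unknown")   # ...ask the "data tier"...
        body = f"<html><body><p>Package {number}: {status}</p></body></html>"
        # ...and send a web page BACK to the presentation tier (the browser).
        self.send_response(200)
        self.send_header("Content-type", "text/html")
        self.end_headers()
        self.wfile.write(body.encode())

HTTPServer(("localhost", 8080), MiddleTier).serve_forever()

Browsing to http://localhost:8080/?tracking=1234 exercises all three tiers in one round trip.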

Multi-tiered: similar to 3-tiered except that even more tasks are broken down.
For example, in the FedEx example, there is not really just 1 web server. There
are dozens. So you are actually talking to a web server node connected to
another processor that accepts web-type traffic, and IT converts the queries
into a form that the main server can understand.

THIN versus FAT clients:

Thin: Places very little application requirement on the client. Easier to
maintain; less for the server to worry about, like what version of client
software each user is running. An example is web-based email, where all the
email processing is done on the server, and only the screens and the data
entered at the client are sent back and forth in a formatted textbox.

Fat: Client software does MORE of the application processing. Email
example:
the server requires the client to be able to process the raw data sent to it as
mail. The client must know how to receive the mail, present it on the screen
to the user, format a response and send it back.


Internet Application software examples:

Electronic mail, Telnet, and FTP have been around as long as the Internet itself.  The Web, however, was developed in the early 1990s.
These applications were originally developed for the Internet, but many organizations are using them on their LANs or for private use on the Internet.



HOW THE WEB WORKS

One of the fastest growing Internet software applications is the World Wide Web.
The Web was first conceived in 1989 by Tim Berners-Lee at the European Laboratory for Particle Physics (CERN) in Geneva.
CERN’s first Web browser was written in 1990, but it was 1991 before it was available on the Internet for other organizations to use.

Each client computer needs an application layer software package called a Web browser.
Each server on the network needs an application layer software package called a Web server.
In order to get a page from the Web, the user must first type the Internet Uniform Resource Locator (URL) for the page, or click on a link that provides the URL.

In order for the request from the Web browser to be understood by the Web server, they must use the same standard protocol. The standard protocol for communication between a Web browser and a Web server is Hypertext Transfer Protocol (HTTP).

An HTTP request from a Web browser to a Web server has three parts.  Only the first part is required, the other two are optional.

A Request from a Web browser to a Web server  using the HTTP standard:

GET http://tcbworks.cba.uga.edu/~adennis/res.htm HTTP/1.1
Date: Mon 03 Aug 1998 17:35:46 GMT
User-Agent: Mozilla/3.0
From: adennis@uga.cc.uga.edu
Referer: http://tcbworks.cba.uga.edu/~adennis/home.htm

Important note: the SERVER knows who you are and some information about your computer, because your web browser reports it to the server.

The format of an HTTP response from the server to the browser is very similar to the browser request.

Only the last part is required, the other two are optional.

HTTP/1.1 200 OK
Date: Mon 03 Aug 1998 17:35:46 GMT
Server: NCSA/1.3
Location: http://tcbworks.cba.uga.edu/~adennis/res.htm
Content-type: text/html

<html>
<head>
<title>Business Data Communications and Networking Web Resources </title>
</head>
<body>
<H2>Resources on the Web </H2>
<P>This section contains links to other resources on the WEB that pertain to
the field of data communications and networking </P>
 

</body>
</html>
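
You can reproduce this exchange from Python with the standard library's http.client module; example.com below is just a stand-in host, since the 1998 URL above no longer exists:

import http.client

# Send a GET request, exactly the role the browser plays above.
conn = http.client.HTTPConnection("example.com")
conn.request("GET", "/", headers={"User-Agent": "MIS4477-demo"})

resp = conn.getresponse()
print(resp.status, resp.reason)          # e.g. "200 OK" - the response status
for name, value in resp.getheaders():    # the response header lines
    print(f"{name}: {value}")
print(resp.read()[:200].decode())        # the start of the HTML body
conn.close()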

Hypertext Markup Language (HTML), seen above, the most commonly used Web format, was developed by CERN at the same time as the first Web browser. HTML is fairly easy to learn, so you can develop your own Web pages.

E-Mail

Electronic mail (e-mail) was one of the original applications of the Internet and is still among the most heavily used today.

E-mail has several advantages over regular mail:

Each client computer in the LAN runs an application layer software package called a user agent, which formats the message into two parts: the header (the source and destination addresses and other information about the message) and the body (the message itself).

Host-based email is 2-tiered:

2 tiers allow senders and receivers to be non-interactive, i.e., the receiver does not have to be online at the time the sender sends his/her email.

The 2 most common mail protocols: SMTP (Simple Mail Transfer Protocol), used to send mail to the server, and POP (Post Office Protocol) or IMAP (Internet Message Access Protocol), used by the client to retrieve mail from the server.
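
A minimal sending sketch using Python's standard smtplib (the server name and the addresses are placeholders, not real accounts):

import smtplib
from email.message import EmailMessage

# The user agent's job: format the message (header + body)...
msg = EmailMessage()
msg["From"] = "student@example.edu"        # placeholder addresses
msg["To"] = "professor@example.edu"
msg["Subject"] = "Chapter 2 question"
msg.set_content("Does the quiz cover client-server architectures?")

# ...then speak SMTP to the mail server. "mail.example.edu" is hypothetical.
with smtplib.SMTP("mail.example.edu", 25) as server:
    server.send_message(msg)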

3-Tiered Email: Web based, e.g. Hotmail, UH Webmail:

LISTSERV DISCUSSION GROUPS

FTP: File Transfer Protocol

Telnet

INSTANT MESSAGING (IM)

Video Conferencing

Real-time transmission of voice, video and data

Proprietary systems first developed

Desktop video conferencing more common

Does Desktop video conferencing deliver value? (research on your own and discuss)

WEBCASTING

End of Lecture 2


© 2014 Jake Messinger (all rights reserved)
Dept of Decision and Information Sciences (MIS)
Bauer College of Business
University Of Houston