What service allows a company to use large databases and sophisticated software even if the company cannot afford a powerful computer?

Platform as a service (PaaS) is a complete development and deployment environment in the cloud, with resources that enable you to deliver everything from simple cloud-based apps to sophisticated, cloud-enabled enterprise applications. You purchase the resources you need from a cloud service provider on a pay-as-you-go basis and access them over a secure Internet connection.

Like IaaS, PaaS includes infrastructure—servers, storage, and networking—but also middleware, development tools, business intelligence (BI) services, database management systems, and more. PaaS is designed to support the complete web application lifecycle: building, testing, deploying, managing, and updating.

PaaS allows you to avoid the expense and complexity of buying and managing software licenses, the underlying application infrastructure and middleware, container orchestrators such as Kubernetes, or the development tools and other resources. You manage the applications and services you develop, and the cloud service provider typically manages everything else.


Figure: the layers of the cloud stack: hosted applications/apps; development tools, database management, business analytics; operating systems; servers and storage; networking firewalls/security; data center physical plant/building.

Common PaaS scenarios

Organizations typically use PaaS for these scenarios:

Development framework. PaaS provides a framework that developers can build upon to develop or customize cloud-based applications. Similar to the way you create an Excel macro, PaaS lets developers create applications using built-in software components. Cloud features such as scalability, high availability, and multi-tenant capability are included, reducing the amount of coding that developers must do.

Analytics or business intelligence. Tools provided as a service with PaaS allow organizations to analyze and mine their data, finding insights and patterns and predicting outcomes to improve forecasting, product design decisions, investment returns, and other business decisions.

Additional services. PaaS providers may offer other services that enhance applications, such as workflow, directory, security, and scheduling.

Advantages of PaaS

By delivering infrastructure as a service, PaaS offers the same advantages as IaaS. But its additional features—middleware, development tools, and other business tools—give you more advantages:

Cut coding time. PaaS development tools can cut the time it takes to code new apps with pre-coded application components built into the platform, such as workflow, directory services, security features, search, and so on.

Add development capabilities without adding staff. Platform as a Service components can give your development team new capabilities without the need to add staff with the required skills.

Develop for multiple platforms—including mobile—more easily. Some service providers give you development options for multiple platforms, such as computers, mobile devices, and browsers, making cross-platform apps quicker and easier to develop.

Use sophisticated tools affordably. A pay-as-you-go model makes it possible for individuals or organizations to use sophisticated development software and business intelligence and analytics tools that they could not afford to purchase outright.

Support geographically distributed development teams. Because the development environment is accessed over the Internet, development teams can work together on projects even when team members are in remote locations.

Efficiently manage the application lifecycle. PaaS provides all of the capabilities that you need to support the complete web application lifecycle: building, testing, deploying, managing, and updating, all within the same integrated environment.

The Database Environment

Jan L. Harrington, in Relational Database Design and Implementation (Fourth Edition), 2016

Client/Server

Client/server architecture shares the data processing chores between a server—typically, a high-end workstation but quite possibly a mainframe—and clients, which are usually PCs. PCs have significant processing power and therefore are capable of taking raw data returned by the server and formatting the result for output. Application programs and query processors can be stored and executed on the PCs. Network traffic is reduced to data manipulation requests sent from the PC to the database server and raw data returned as a result of that request. The result is significantly less network traffic and theoretically better performance.
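To make the request/response pattern concrete, here is a minimal sketch in Python of a database server that executes a query and ships back only the matching rows, with the client formatting the result. The port, the query format, and the in-memory employee table are invented for illustration and are not part of Harrington's text.

```python
# Minimal sketch of the traffic pattern described above: the client sends a
# small data manipulation request, the server filters the data, and only the
# matching rows travel back. The port, query format, and employee table are
# invented for illustration.
import socket
import threading
import time

EMPLOYEES = [                       # stand-in for data held by the database server
    ("1001", "Ada", "Engineering"),
    ("1002", "Grace", "Engineering"),
    ("1003", "Jan", "Marketing"),
]

def serve(host="127.0.0.1", port=5050):
    """Database server: receive one request, return only the matching rows."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            wanted = conn.recv(1024).decode().strip()        # e.g. "Engineering"
            rows = [r for r in EMPLOYEES if r[2] == wanted]  # filtering happens server-side
            conn.sendall(repr(rows).encode())                # ship back results only

def query(department, host="127.0.0.1", port=5050):
    """Client (PC): send the request, then format the raw rows for output."""
    with socket.create_connection((host, port)) as c:
        c.sendall(department.encode())
        return c.recv(4096).decode()

if __name__ == "__main__":
    threading.Thread(target=serve, daemon=True).start()
    time.sleep(0.2)                       # give the server a moment to start listening
    print(query("Engineering"))           # only the two matching rows cross the network
```

Only the filtered rows cross the network, which is the traffic reduction the paragraph describes.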

Today’s client/server architectures exchange messages over LANs. Although a few older Token Ring LANs are still in use, most of today’s LANs are based on Ethernet standards. As an example, take a look at the small network in Figure 1.3. The database runs on its own server (the database server), using additional disk space on the network attached storage device. Access to the database is controlled not only by the DBMS itself, but by the authentication server.


Figure 1.3. Small LAN with network-accessible database server.

A client/server architecture is similar to the traditional centralized architecture in that the DBMS resides on a single computer. In fact, many of today’s mainframes actually function as large, fast servers. The need to handle large data sets still exists although the location of some of the processing has changed.

Because a client/server architecture uses a centralized database server, it suffers from the same reliability problems as the traditional centralized architecture: if the server goes down, data access is cut off. However, because the “terminals” are PCs, any data downloaded to a PC can be processed without access to the server.


URL: https://www.sciencedirect.com/science/article/pii/B9780128043998000016

Human Resource Information Systems

Michael Bedell, in Encyclopedia of Information Systems, 2003

1 Client–server Architecture

The client–server architecture is a distributed computing system in which tasks are split between software on the server computer and software on the client computer. The client computer initiates activity by requesting information or services from the server. This is comparable to a customer who orders materials from a supplier, who responds to the request by shipping the requested materials. A strength of this architecture is that distributed computing resources on a network share resources, in this case a single database, among many users. Another strength of this architecture is that additional hardware can easily be added to increase computing power. In the case of the HRIS, the HR professional uses their client computer to request information appropriate to their security clearance from the server. The HRIS server computer houses the database that contains the organization's data.


URL: https://www.sciencedirect.com/science/article/pii/B0122272404000861

Choosing the Right Software

Anthony C. Caputo, in Digital Video Surveillance and Security, 2010

Troubleshooting

The VMS is a client/server architecture and follows many of the same rules as any other client/server application with the possible exception of more granular software permissions and privileges options.

As with any troubleshooting, as depicted in Figure 7-10, you must first confirm there's power throughout the topology. If there are two switches and a router between the workstation and the server, make sure everything is powered and running. A simple ping test from the workstation to the server can verify clear connectivity, and if that fails, then ping the router and/or firewall. If that fails, then check the network connection at the workstation and any switch in between.


Figure 7-10. VMS login troubleshooting.

A PATHPING can also provide a network path from the workstation to the server when changes have been made to the topology, such as switching to another router and/or firewall that hasn't been properly configured for the VMS system.

Depending on the complexity of the permissions and privileges, a quick diagnosis of authentication problems may be to just log in as another known-good account. If that's successful, then there may either be a mistakenly deleted account ID (it's happened) or the permissions were changed, making it impossible for that specific ID to log in to the system. Following the network troubleshooting suggestions in Chapter 4 will also assist in uncovering the problem.
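The ping-then-pathping sequence described above can also be scripted. Below is a rough sketch in Python; the addresses are placeholders, and the command flags assume a Windows workstation, where ping takes -n and pathping is available.

```python
# Rough sketch of the troubleshooting sequence described above: ping the VMS
# server first, then fall back to the router/firewall. The addresses are
# placeholders (TEST-NET range) and the flags assume a Windows workstation.
import subprocess

HOPS = {
    "VMS server": "192.0.2.10",
    "router/firewall": "192.0.2.1",
}

def reachable(address):
    """Return True if a single ping to the address succeeds."""
    result = subprocess.run(["ping", "-n", "1", address],
                            capture_output=True, text=True)
    return result.returncode == 0

for name, address in HOPS.items():
    if reachable(address):
        print(f"{name} ({address}) responded; connectivity is good up to this point.")
        break
    print(f"No reply from {name} ({address}); trying the next device back toward the workstation.")
else:
    print("Nothing answered: check the workstation's own network connection and any switch in between.")

# After topology changes, pathping traces the route and per-hop loss:
# subprocess.run(["pathping", HOPS["VMS server"]])
```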


URL: https://www.sciencedirect.com/science/article/pii/B978185617747400007X

Choosing the Right Software

Anthony C. Caputo, in Digital Video Surveillance and Security (Second Edition), 2014

Troubleshooting

The VMS is a client/server architecture and follows many of the same rules as any other client/server application, with the possible exception of more granular software permissions and privileges options.

As with any troubleshooting activity, as depicted in Figure 8.16, you must first confirm that there is connectivity between the client software and the server itself. The easiest method for determining successful connectivity is to check whether the VMS Server software prompts for a user ID and password. This means that the server software “heard” the request from the client application. If there is no login prompt, confirm that there is power throughout the topology. If there are two switches and a router between the workstation and the server, make sure everything is powered and running. A simple ping test from the workstation to the server can verify clear connectivity, and if that fails, then ping the router and/or firewall. If that fails, check the network connection at the workstation and any switch in between.


FIGURE 8.16. VMS login troubleshooting.

A pathping can also provide a network path from the workstation to the server in the event that any changes have been made to the topology, such as switching to another router or firewall, that might not have been properly configured for the VMS.

Depending on the complexity of the permissions and privileges, a quick diagnosis of authentication problems may be to simply log in as another known good account. If that’s successful, then there may either be a mistakenly deleted account ID (it has happened) or the permissions were changed, making it impossible for that specific ID to log into the system. Following the network troubleshooting suggestions in Chapter 4 will also assist you in uncovering the problem.


URL: https://www.sciencedirect.com/science/article/pii/B9780124200425000083

Literature Review

Oluwatobi Ayodeji Akanbi, ... Elahe Fazeldehkordi, in A Machine-Learning Approach to Phishing Detection and Defense, 2015

2.5.2 Lookup System

Lookup systems implement a client–server architecture such that the server side maintains a blacklist of known fake URLs (Li and Helenius, 2007; Zhang et al., 2006) and the client-side tool examines the blacklist and provides a warning if a website poses a threat. Lookup systems utilize collective sanctioning mechanisms akin to those in reputation ranking mechanisms (Hariharan et al., 2007). Online communities of practice and system users provide information for the blacklists. Online communities such as the Anti-Phishing Working Group and the Artists Against 4-1-9 have developed databases of known concocted and spoof websites. Lookup systems also study URLs directly reported or assessed by system users (Abbasi and Chen, 2009b).

There are several lookup systems in existence; perhaps the most common is Microsoft’s IE Phishing Filter, which makes use of a client-side whitelist combined with a server-side blacklist gathered from online databases and IE user reports. Similarly, Mozilla Firefox’s FirePhish toolbar and the EarthLink toolbar also maintain blacklists of spoof URLs. Firetrust’s Sitehound system stores spoof and concocted site URLs taken from online sources such as the Artists Against 4-1-9. A benefit of lookup systems is that they characteristically have a high measure of accuracy, as they are less likely to flag an authentic site as phony (Zhang et al., 2006). They are also simpler to work with and faster than most classifier systems in terms of computational power; comparing URLs against a list of identified phonies is rather simple. In spite of this, lookup systems are still vulnerable to higher levels of false negatives, failing to identify fake websites. Also, one of the limitations of blacklists is the small number of available online resources and their limited coverage. For example, the IE Phishing Filter and FirePhish tools only amass URLs for spoof sites, making them ineffective against concocted sites (Abbasi and Chen, 2009b). The performance of lookup systems might also vary on the basis of the time of day and the interval between report and evaluation time (Zhang et al., 2006). Moreover, blacklists tend to contain older fake websites rather than newer ones, which gives impostors a better chance of a successful attack before being blacklisted. Furthermore, Liu et al. (2006) claimed that 5% of spoof site recipients become victims in spite of the availability of a profusion of web browser integrated lookup systems.
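As a rough sketch of the client-side check such a lookup system performs, the snippet below normalizes a URL to its host name and compares it against a blacklist. The sample entries and the normalization rule are invented for illustration and are not taken from any real feed.

```python
# Illustrative sketch of a client-side blacklist lookup. The sample URLs and
# the normalization rule are assumptions for demonstration, not a real feed.
from urllib.parse import urlparse

# In a real lookup system this set would be fetched from the vendor's
# server-side blacklist and refreshed periodically.
BLACKLIST = {
    "paypa1-secure-login.example",
    "bank-verify-account.example",
}

def normalize(url):
    """Reduce a URL to its lowercase host name for comparison."""
    host = urlparse(url).hostname or ""
    return host.lower()

def check(url):
    """Return a warning if the URL's host appears on the blacklist."""
    if normalize(url) in BLACKLIST:
        return f"WARNING: {url} matches a known fake site."
    return f"{url} is not on the blacklist (this does not guarantee it is safe)."

print(check("http://paypa1-secure-login.example/signin"))
print(check("https://www.example.org/"))
```

The second message reflects the false-negative limitation discussed above: absence from the blacklist proves nothing about a newly created fake site.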


URL: https://www.sciencedirect.com/science/article/pii/B9780128029275000022

Server Classifications

Shu Zhang, Ming Wang, in Encyclopedia of Information Systems, 2003

I.B.2.b. Three-tier Client/server Architecture

The need for enterprise scalability challenged the traditional two-tier client/server architecture. In the mid-1990s, as applications became more complex and potentially could be deployed to hundreds or thousands of end users, the client side presented problems that prevented true scalability. Because two-tier client/server applications are not optimized for WAN connections, response time is often unacceptable for remote users. Application upgrades require software and often hardware upgrades on all client PCs, resulting in potential version control problems.

By 1995, a new, three-layer client/server architecture was proposed, with each layer running on a different platform:

1. Tier one is the user interface layer, which runs on the end-user's computer.

2. Tier two is the business logic and data processing layer. This added middle tier runs on a server and is often called the application server.

3. Tier three is the data storage system, which stores the data required by the middle tier. This tier may run on a separate server, called the database server, and is also known as the back-end server.

In a three-tier application, the user interface processes remain on the client computers, but the business rules processes reside and execute on the middle application layer, between the client's computer and the computer that hosts the data storage/retrieval system. One application server is designed to serve multiple clients. In this type of application, the client would never access the data storage system directly.
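Below is a minimal sketch of this three-tier split, with each tier as a separate Python class standing in for code that would run on a different machine. The class names, the order data, and the business rule are invented for illustration.

```python
# Minimal sketch of the three tiers described above. In a real deployment each
# class would run on a different machine; here they sit in one process purely
# for illustration, and the data is an invented example.

class DatabaseServer:                     # tier three: data storage (back-end server)
    def __init__(self):
        self._orders = {"A-17": {"qty": 3, "unit_price": 40.0}}

    def fetch_order(self, order_id):
        return self._orders[order_id]

class ApplicationServer:                  # tier two: business logic (middle tier)
    def __init__(self, db):
        self._db = db                     # clients never touch the database directly

    def order_total(self, order_id):
        order = self._db.fetch_order(order_id)
        return order["qty"] * order["unit_price"]

class UserInterface:                      # tier one: runs on the end-user's computer
    def __init__(self, app):
        self._app = app

    def show_total(self, order_id):
        print(f"Order {order_id} total: ${self._app.order_total(order_id):.2f}")

ui = UserInterface(ApplicationServer(DatabaseServer()))
ui.show_total("A-17")                     # prints: Order A-17 total: $120.00
```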


URL: https://www.sciencedirect.com/science/article/pii/B012227240400157X

Next-generation Internet architecture

Dimitrios Serpanos, Tilman Wolf, in Architecture of Network Systems, 2011

New networking paradigms

The trend toward heterogeneity in systems connected via the Internet has impacted the diversities of networking paradigms used in the network. While traditional communication principles based on client–server architecture still dominate the Internet, there are several other approaches on how to distribute information across the network. Some of them, for example, peer-to-peer networking, can be implemented in the existing networking infrastructure. However, others require a fundamental change in the way the network operates. The latter requires changes in the network that cannot be accommodated in the current Internet, but need to be considered for future Internet architectures.

Examples of communication paradigms that differ from client–server architectures are:

Peer-to-peer networking: Peer-to-peer (P2P) networks combine the roles of the client and the server in each of the peer nodes [134]. Figure 15-1 shows client–server communication, and Figure 15-2 shows peer-to-peer communication in contrast. Instead of distributing information from a single centralized server, all peers participate in acting as servers to which other peers can connect. Using appropriate control information, a peer can determine which other peer to connect to in order to obtain a certain piece of information. This P2P communication can be implemented using existing networking technology, as it simply requires changes to the end-system application.


Figure 15-1. Client–server communication in network.


Figure 15-2. Peer-to-peer communication in network.

Content delivery networks: Content delivery networks aim to push content from the source (i.e., the server) toward potential users (i.e., clients) instead of waiting for clients to explicitly pull content. Figure 15-3 shows this process. The proactive distribution of content allows clients to access copies of content located closer to them, so better access performance can be achieved than when accessing the original server. The use of content distribution requires that the network support mechanisms that allow redirection of a request to a local copy. In practice, this type of anycast can be achieved by manipulating DNS entries as described in Hardie [61].


Figure 15-3. Content delivery in network.

Information fusion in sensor networks: Many sensor networks consist of low-power wireless sensors that monitor physical properties of the environment [152]. These sensor networks communicate using wireless ad hoc networks that do not provide continuous connectivity. Sensing results are transmitted between neighbors for further relay, as illustrated in Figure 15-4. In applications where data collected by multiple sensors can be aggregated, information fusion is employed (e.g., to determine the maximum observed temperature, a node can aggregate all the available thermal sensor results and compute the result of the maximum function; a minimal sketch of this appears after the list). In such networks, access to data is considerably different from conventional client–server architectures, as direct access to the source of data (i.e., the sensor) is not possible.


Figure 15-4. Information fusion in network.

Delay-tolerant networking: Delay-tolerant networks consist of network systems that cannot provide continuous connectivity [51]. Application domains include vehicular networks, where vehicles may be disconnected for some time, and mobile ad hoc networks in general. Protocols used in delay-tolerant networks are typically based on a store-and-forward approach, as it cannot be assumed that a complete end-to-end path can be established for conventional communication. Figure 15-5 shows this type of communication. The requirement for nodes to store data for potentially considerable amounts of time requires fundamental changes in the functionality of network systems.


Figure 15-5. Delay-tolerant communication in network.

These new communication paradigms shift the fundamental requirements of the network architecture.
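As a small illustration of the information-fusion idea mentioned above, the sketch below aggregates invented temperature readings up a tiny tree of sensor nodes, so the sink sees only the fused maximum rather than each raw reading. The node layout and the readings are assumptions for demonstration only.

```python
# Minimal sketch of in-network information fusion: each node aggregates the
# readings relayed to it by its neighbours before passing a single value on.
# The node layout and temperature readings are invented for illustration.

class SensorNode:
    def __init__(self, name, reading):
        self.name = name
        self.reading = reading       # this node's own thermal reading (deg C)
        self.neighbours = []         # nodes that relay their results to this one

    def fused_max(self):
        """Aggregate: the maximum of this node's reading and everything relayed to it."""
        relayed = [n.fused_max() for n in self.neighbours]
        return max([self.reading] + relayed)

# A tiny tree of sensors: leaf readings flow toward the sink node.
leaf_a = SensorNode("leaf-a", 21.5)
leaf_b = SensorNode("leaf-b", 24.0)
relay = SensorNode("relay", 22.1)
relay.neighbours = [leaf_a, leaf_b]
sink = SensorNode("sink", 20.3)
sink.neighbours = [relay]

# The sink never contacts leaf-a or leaf-b directly; it only sees the fused value.
print(sink.fused_max())   # 24.0
```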


URL: https://www.sciencedirect.com/science/article/pii/B9780123744944000153

Platform Architecture

Amrit Tiwana, in Platform Ecosystems, 2014

5.4.1.6 Client–server microarchitecture

The fourth widely used app microarchitecture is client–server architecture (Figure 5.14). This arrangement evenly splits the four functional elements of an app between clients and servers. While data access logic and data storage reside on the server side, presentation and application logic reside on the client side. In practice, the application logic is often split between the client and server, although it predominantly resides on the client. This design balances processing demands on the server by having the client do the bulk of application logic and presentation. It also reduces the network intensity of an app by limiting the data flowing over the Internet to only that which is needed by the user. Placing the data access logic on the server side accomplishes this; queries are initiated by the client but executed by the server, which sends back only the results of those queries rather than the entire raw data, as client-based microarchitectures do. The downside of client–server app microarchitectures is that different types of client devices must be designed to invoke the data access logic on the server side in compatible ways. Depending on how the server-side functionality is split between an app and the platform, this arrangement can free app developers to focus their attention on developing the core functionality of the app (where most end-user value is generated) and fret less about the data management aspects of the app.


Figure 5.14. Client–server app microarchitectures evenly split application functionality among clients and servers.
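The four-way split described above can be sketched roughly as follows. The class names, the track data, and the query are invented for illustration, and in a real app the call between the two classes would be a network request rather than a local method call.

```python
# Rough sketch of the four functional elements described above, with invented
# names and data. Only the query results would cross the Internet, never the
# raw track list held on the server side.

class ServerSide:
    """Data storage plus data access logic."""
    _tracks = [("Song A", 210), ("Song B", 180), ("Song C", 305)]  # raw data stays here

    def run_query(self, min_seconds):
        # Data access logic: execute the query and return only its results.
        return [title for title, length in self._tracks if length >= min_seconds]

class ClientSide:
    """Presentation plus (most of the) application logic."""
    def __init__(self, server):
        self._server = server

    def show_long_tracks(self):
        results = self._server.run_query(200)    # query initiated by the client
        print("Tracks of 200 s or longer: " + ", ".join(results))   # presentation

ClientSide(ServerSide()).show_long_tracks()   # prints: Tracks of 200 s or longer: Song A, Song C
```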


URL: https://www.sciencedirect.com/science/article/pii/B9780124080669000059

Cloud Infrastructure Servers

Caesar Wu, Rajkumar Buyya, in Cloud Data Centers and Cost Modeling, 2015

11.1.1 A Client/Server Architecture

From Figure 11.4, the mechanism of the client/server architecture is quite easy to understand. The software or application installed on a client machine (a PC, desktop, or laptop computer) is the front end of the application. It manages local client resources, such as the monitor, keyboard, mouse, RAM, CPU, and other peripherals. If we replace it with a virtualized infrastructure, the remote virtual desktop infrastructure (VDI), it becomes a cloud VDI.

In comparison with mainframe terminals, the client is no longer a dumb machine. It has become a more powerful PC because it has its own computational environment. The client can be considered a customer who requests services. A server is similar to a service provider who serves many clients.

At the other side of the client/server architecture is a server. The function of the server machine is to fulfill all client requests. This means that the server-provided services can be shared among different clients. The server (or servers) is normally in a centralized location, namely a data center, though it may also be called a server room, network room, LAN room, wiring closet, or network storage room, depending on the size of the server fleet. The connection between clients and servers is via either a dedicated network or the Internet. Theoretically speaking, all servers are totally transparent to clients. The communication between client and server is based on standard protocols, such as Ethernet or TCP/IP. Once the clients initiate service requests, the server will respond and execute these requests, such as data retrieval, updating, dispatching, storing, and deleting. Different servers can offer different services. A file server provides file system services, such as document, photo, music, and video files. A web server provides web content services. A storage server hosts storage services with different service levels, such as platinum, gold, silver, bronze, backup, and archive storage services. An application server supports application services, such as office applications or email. In addition, a server can also act as a software engine that manages shared resources such as databases, printers, network connectivity, or even the CPU. The main function of a server is to perform back-end tasks (see Figure 11.6).


Figure 11.6. Client/server relationship.

Among these different types of servers, the simplest is the storage server. With a file server, such as an FTP (File Transfer Protocol), SMB/CIFS (Server Message Block/Common Internet File System), or NFS (Network File System, Sun) server, a client may request a file or files over the Internet or LAN. The load this creates depends on the file size: if the file is very large, the client's request needs a large amount of network bandwidth, which will drag the network speed down. Database, transaction, and application servers are more sophisticated.
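To make the bandwidth point concrete, here is a small worked example with invented numbers: a 2 GB file requested from a file server over a 100 Mbit/s LAN ties up the link for roughly 160 seconds in the best case.

```python
# Worked example of why large file requests strain the network. The file size
# and link speed are invented numbers for illustration.
file_size_gb = 2.0          # requested video file, in gigabytes
link_mbit_per_s = 100.0     # shared LAN capacity, in megabits per second

transfer_seconds = (file_size_gb * 8_000) / link_mbit_per_s   # GB -> megabits
print(f"Best-case transfer time: {transfer_seconds:.0f} s "
      f"(~{transfer_seconds / 60:.1f} minutes of sustained load on the link)")
# Prints: Best-case transfer time: 160 s (~2.7 minutes of sustained load on the link)
```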

Since the 1980s, the number of hosting servers has been growing exponentially (see Figure 11.7). When ISC began to survey the host count in January 1993, the number of hosts was only 1,313,000; however, by July 2013, the host count had reached 996,230,757, an increase of almost 760 times. Yet the average hosting server's utilization rate remains very low. Based on Gartner's report in November 2008, the average utilization rate of x86 servers for most organizations was only between 7% and 15% [185]. Our experience indicates that for some mobile content servers, the utilization rate is even below 1%. This has led to server consolidation, leveraging dramatic improvements in virtualization technology. In essence, Internet technology and services sparked the acceleration of host server growth, and growing server volume has triggered server virtualization, which lays the basic infrastructure foundation for the cloud.


Figure 11.7. Internet domain hosts [184].

Now that we have clarified the concepts of client, server, and client/server architecture, in the following sections, we will take a close look at both x86 and RISC servers.


URL: https://www.sciencedirect.com/science/article/pii/B9780128014134000118

Reality Check

BERTHOLD DAUM, in Modeling Business Objects with XML Schema, 2003

11.1 MOTIVATION

Relational databases became a strategic data storage technology when client-server architectures in enterprises emerged. With different clients requiring different views on the same data, the ability to freely construct complex data structures from the simplest data atoms was crucial—a requirement that the classical hierarchical database systems could not fulfill. Relational technology made the enterprise data view possible, with one (big) schema describing the information model of the whole enterprise. Thus, each relational schema defines one ontology, one Universe of Discourse.

And this is the problem. Most enterprises cannot afford to be a data island anymore. Electronic business, company mergers, collaborations such as automated supply chains or virtual enterprises require that information can be exchanged between enterprises and that the cost of conversion is low. This is not the case if conversion happens only on a bilateral level, starting from scratch with every new partner.

XML represents a way to avoid this chaos. Because of its extensibility, XML allows the use of generic, pivot formats for various proprietary company formats. Usually business groups and associations define these formats. If such a format does not satisfy the needs of a specific partner completely, it is relatively easy to remedy by dialecting—extending the generic format with additional specific elements. This is why the integration of XML with relational technology is all-important for enterprise scenarios. We will see that XML Schema has been designed very carefully with this goal in mind.


URL: https://www.sciencedirect.com/science/article/pii/B9781558608160500136

What technique uses computers to analyze large quantities of data and help human resource managers make decisions?

Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention.

What is a key advantage of human resource information systems?

Employee database. A good HRIS gives you, your HR department and your other employees access to contact information for anyone on the team. By providing a database and directory for each employee, an HRIS can engender communication between employees and departments, thus creating a more productive workplace.

What is the process by which a company seeks applicants for potential employment?

Recruitment refers to the process of identifying, attracting, interviewing, selecting, hiring and onboarding employees. In other words, it involves everything from the identification of a staffing need to filling it. Depending on the size of an organization, recruitment is the responsibility of a range of workers.

How does HR technology help organizations deliver global strategic HR activities in a more efficient way?

Automation of HR processes. The implementation of technology into the HR workflow frees the professionals from a great amount of routine work. The automation of processes eliminates paperwork, speeds up the execution of many tasks, and contributes to more efficient HR performance.