Which Of The Following Is A Method Of Inputting Data Into A Transaction-processing System
Transaction Processing System
Introduction
Philip A. Bernstein, Eric Newcomer, in Principles of Transaction Processing (Second Edition), 2009
Real-Time Systems
TP systems are similar to real-time systems, such as a system collecting input from a satellite or controlling a factory's shop floor equipment. TP essentially is a kind of real-time system, with a real-time response time demand of one to two seconds. It responds to a real-world process consisting of end-users interacting with display devices, which communicate with application programs accessing a shared database. So not surprisingly, there are many similarities between the two kinds of systems.
Real-time systems and TP systems both have predictable loads with periodic peaks. Real-time systems usually emphasize gathering input rather than processing it, whereas TP systems generally do both.
Due to the variety of real-world processes they control, real-time systems generally have to deal with more specialized devices than TP, such as laboratory equipment, factory shop floor equipment, or sensors and control systems in an automobile or plane.
Real-time systems generally don't need or use special mechanisms for atomicity and durability. They just process the input as quickly as they can. If they lose some of that input, they ignore the loss and keep on running. To see why, consider the example of a system that collects input from a monitoring satellite. It's not good if the system misses some of the data coming in. But the system certainly can't stop operating to go back and fix things up like a TP system would do—the data keeps coming in and the system must do its best to continue processing it. By contrast, a TP environment can generally stop accepting input for a short time or can buffer the input for awhile. If there is a failure, it can stop collecting input, run a recovery procedure, and then resume processing input. Thus, the fault-tolerance requirements of the two types of systems are rather different.
Real-time systems are mostly not concerned with serializability. In most real-time applications, processing of input messages involves no access to shared data. Since the processing of two different inputs does not affect each other, even if they're processed concurrently, they'll behave like a serial execution. No special mechanisms, such as locking, are needed. When the processing of real-time inputs does involve shared data, the notion of serializability is as relevant as it is to TP. However, in this case, real-time applications generally make direct use of low-level synchronization primitives for mutual exclusion, rather than relying on a general-purpose synchronization mechanism that is hidden behind the transaction abstraction.
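For illustration, here is a minimal sketch in Java (all names are hypothetical) of that low-level style: a handler guards a small piece of shared state with a mutual-exclusion lock directly, with no transaction abstraction in sight.

```java
import java.util.concurrent.locks.ReentrantLock;

// A real-time-style handler: shared state protected by a low-level lock.
public class SensorAggregator {
    private final ReentrantLock lock = new ReentrantLock();
    private double runningTotal;
    private long sampleCount;

    // Called concurrently for each incoming reading; the lock alone ensures
    // the two fields are updated atomically with respect to other threads.
    public void record(double reading) {
        lock.lock();
        try {
            runningTotal += reading;
            sampleCount++;
        } finally {
            lock.unlock();
        }
    }

    public double average() {
        lock.lock();
        try {
            return sampleCount == 0 ? 0.0 : runningTotal / sampleCount;
        } finally {
            lock.unlock();
        }
    }
}
```

If an input is lost or a thread dies mid-stream, nothing is rolled back; the system simply keeps running, which is exactly the fault-tolerance posture the text describes.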
URL: https://www.sciencedirect.com/science/article/pii/B9781558606234000019
Introduction to Cloud Computing and Security
Vic (J.R.) Winkler, in Securing the Cloud, 2011
Networking, the Internet, and the Web
Transaction processing systems arose to meet the demand for interaction by increasing numbers of people with a single database. In this model, a single server performed computation and data storage while simpler client machines served for input and output. Airline reservation systems took this model and pushed connected clients to the far corners of the Earth. Initially, the client had no local storage and was connected to the server via a dedicated communications link.
Similar to transaction processing systems, client/server began with the commodity PC client simply performing input/output while the server ran the custom software. But this quickly changed as the power of the underlying PC client proved to make some local computation important for overall performance and increased functionality. Now the PC was connected by a more general-purpose local area network or wide area network that had other uses as well. With client/server came advances in more user-friendly interfaces.
Where we were once limited to interacting with computers via directly connected card readers and terminals, we experienced a great untethering, first via primitive modems, later with the Internet, and more recently with pervasive high-bandwidth networking and wireless. Again, we saw erosion in security as these conveniences made life simpler for all, including those who delighted in exploiting poor software and poor implementations. Moreover, much infrastructure appeared to grow organically and was less planned than a garden of weeds. The consequences? Increased operating costs and insecurity were pervasive.
If the Internet brought a quiet and relatively slow revolution, the World Wide Web brought an explosive one. Web sites sprang up on standard servers that ran standard software. With the first Web sites and the first Web browser, it became evident that the way we were to interact with information was quickly changing. Simple server software, simple browsers, and a common set of Internet protocols were all it seemed to take to make it work. This interaction model expanded to include Web-based applications that let formerly stand-alone applications be expressed via Web technology.
URL: https://www.sciencedirect.com/science/article/pii/B9781597495929000014
System Recovery
Philip A. Bernstein, Eric Newcomer, in Principles of Transaction Processing (Second Edition), 2009
Publisher Summary
Transaction processing (TP) systems often are expected to be available 24 hours per day, seven days per week, to support around-the-clock business operations. Two factors affect their availability: the mean time between failures (MTBF) and the mean time to repair (MTTR). Improving availability requires increasing MTBF, decreasing MTTR, or both. Computer failures occur because of environmental factors, system management, hardware, and software. If the operating system fails, then just reboot it. For other types of software failure, the transactional middleware or database system must detect the failure of a process and re-create it. The re-created process must then run a recovery procedure to reconstruct its state. Transactions simplify recovery by allowing a server to focus on restoring its state to contain just the results of committed transactions, rather than recovering to a state that is consistent with the last operations it ran. All of today's recovery mechanisms require every transaction to hold its write locks until it commits, to avoid cascading aborts and to ensure that undo can be implemented simply by restoring an update's before-image. The recovery manager uses a cache manager to fetch pages from disk and later flush them. In addition to processing commit and abort operations, it implements a recovery algorithm to recover from system failures. The recovery manager tells the cache manager about dependencies between dirty database pages and log pages so the cache manager can enforce the write-ahead log protocol.
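The before-image idea can be sketched in a few lines of Java. This is a toy, in-memory stand-in, not the book's actual algorithm; it only shows why holding write locks until commit keeps undo simple: abort just restores before-images in reverse order.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class UndoLog {
    private final Map<Integer, String> pages = new HashMap<>();              // the "database"
    private final Deque<Map.Entry<Integer, String>> beforeImages = new ArrayDeque<>();

    // A transactional write: log the page's before-image, then overwrite.
    public void write(int pageId, String newValue) {
        beforeImages.push(Map.entry(pageId, pages.getOrDefault(pageId, "")));
        pages.put(pageId, newValue);
    }

    // Commit: the updates stay; the before-images are no longer needed.
    public void commit() {
        beforeImages.clear();
    }

    // Abort: undo by restoring before-images in reverse order of the writes.
    public void abort() {
        while (!beforeImages.isEmpty()) {
            Map.Entry<Integer, String> e = beforeImages.pop();
            pages.put(e.getKey(), e.getValue());
        }
    }

    public String read(int pageId) {
        return pages.getOrDefault(pageId, "");
    }
}
```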
URL: https://www.sciencedirect.com/science/article/pii/B978155860623400007X
Business Processes and Information Flow
David Loshin, in Business Intelligence (Second Edition), 2013
Transaction Processing
Operations in a transaction processing system are interactions between a user and a computer system where there is the perception of an immediate response from the system to the user's requests. A commonly encountered example of transaction processing is the use of an automated teller machine (ATM), as shown in Figure 6.1.
Although there is an appearance of a monolithic system that responds to user requests, behind the scenes each interaction may involve a large number of interdependent systems. The concept of a transaction actually incorporates this reality: a transaction is really a set of operations grouped together as a unit of work, where no individual operation takes its long-term effect unless all the operations can take effect. So, using the ATM example, before the bank allows the ATM to disburse cash, the user's account balance must be queried to see if there are sufficient funds, the ATM must be checked to see if it has enough cash to satisfy the request, the user's account must then be debited, and the cash can be disbursed. However, if the result of any of these subsidiary operations indicates that servicing the request is infeasible, all the operations must be rolled back—you wouldn't want the bank to debit your account without giving you the cash, nor would the bank want the cash to be disbursed without debiting your account. In this case the data flow follows the thread of control as it passes through the individual interaction associated with each transaction.
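A minimal sketch of the ATM example in Java/JDBC follows; the table and column names are invented for illustration. The point is the bracketing: every step happens inside one transaction, and any infeasible step rolls back all of them.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Hypothetical schema: accounts(id, balance_cents), atm_inventory(atm_id, cash_cents).
public class AtmWithdrawal {

    // Returns true if the cash may be disbursed; false if the request was infeasible.
    public static boolean withdraw(Connection conn, long accountId, int atmId,
                                   long amountCents) throws SQLException {
        conn.setAutoCommit(false); // group all steps into one unit of work
        try {
            // Step 1: query the account balance (locking the row).
            try (PreparedStatement q = conn.prepareStatement(
                    "SELECT balance_cents FROM accounts WHERE id = ? FOR UPDATE")) {
                q.setLong(1, accountId);
                try (ResultSet rs = q.executeQuery()) {
                    if (!rs.next() || rs.getLong(1) < amountCents) {
                        conn.rollback();               // insufficient funds: undo everything
                        return false;
                    }
                }
            }
            // Step 2: check the ATM has enough cash, decrementing it if so.
            try (PreparedStatement u = conn.prepareStatement(
                    "UPDATE atm_inventory SET cash_cents = cash_cents - ? " +
                    "WHERE atm_id = ? AND cash_cents >= ?")) {
                u.setLong(1, amountCents);
                u.setInt(2, atmId);
                u.setLong(3, amountCents);
                if (u.executeUpdate() == 0) {          // not enough cash in the machine
                    conn.rollback();
                    return false;
                }
            }
            // Step 3: debit the user's account.
            try (PreparedStatement d = conn.prepareStatement(
                    "UPDATE accounts SET balance_cents = balance_cents - ? WHERE id = ?")) {
                d.setLong(1, amountCents);
                d.setLong(2, accountId);
                d.executeUpdate();
            }
            conn.commit();                             // all steps take effect together
            return true;
        } catch (SQLException e) {
            conn.rollback();                           // any failure undoes every step
            throw e;
        }
    }
}
```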
URL: https://www.sciencedirect.com/science/article/pii/B9780123858894000065
Transaction Processing Application Architecture
Philip A. Bernstein, Eric Newcomer, in Principles of Transaction Processing (Second Edition), 2009
3.2 Application Architecture
There are three interrelated ways to decompose a TP system: by functional components, by hardware subsystems, and by operating system processes. The decomposition by functional components is shown in Figure 3.1. It consists of front-end programs, request controllers, transaction servers, and database systems. In the past, this was called a three-tier architecture, consisting of the front-end program as the first tier, the database system as the third tier, and everything in between as the middle tier. As systems have become more layered, it is no longer clear how many tiers are present. We therefore call it a multitier architecture.
The display device, shown in the upper left, interacts with a component that we call the front-end program, which is responsible for gathering input and displaying output. It captures input from forms, menu selections, and the like; validates the input; and translates it into a request message.
The front-end program communicates with the device in a device-specific format. The types of display devices change frequently, based in large part on the price of hardware to implement them. Today, a web browser running on a PC is a common device. In this case, the front-end program is a web browser connected to a web server that uses the HTTP protocol and some variant of hypertext markup language (HTML) plus some scripting.
The front-end program may respond to some requests itself. It sends other requests to the next stage of the system, either by storing them on a disk in a queue or forwarding them directly for processing by the application, in a component that we call the request controller.
The request controller component guides the execution of requests. It determines the steps required to execute the request. It then executes those steps by invoking transaction servers. The application executing in this component usually runs as part of an ACID transaction.
Transaction servers are processes that run application programs that do the actual work of the request. They almost always execute within a transaction. The transaction server usually communicates with one or more database systems, which may be private to a particular transaction server or may be shared by several of them.
Like any program, a TP application usually is constructed by composing simple components into more complex ones. Simple transaction server applications can be composed into a compound application using local procedure calls, such as composing DebitChecking and PayLoan into PayLoanFromChecking as we saw in Section 2.2. To compose distributed components, a distributed communications mechanism is needed, such as a remote procedure call or asynchronous message queue. Service-oriented components and workflow mechanisms can also play a part in this composition. Compound applications can then be composed into even higher level functions. This composition of components can have several levels, which sometimes makes the distinction between request controller and transaction server programs rather fuzzy. In such situations, a program may perform both request controller and transaction server functions.
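A minimal Java sketch of that composition (the names DebitChecking, PayLoan, and PayLoanFromChecking come from the text; the in-memory bookkeeping is invented): two simple transaction server programs composed into a compound one with ordinary local procedure calls.

```java
import java.util.HashMap;
import java.util.Map;

public class LoanServer {
    private final Map<Long, Long> checkingBalances = new HashMap<>(); // account -> cents
    private final Map<Long, Long> loanBalances = new HashMap<>();     // account -> cents owed

    // Simple transaction server program #1.
    public void debitChecking(long account, long amountCents) {
        checkingBalances.merge(account, -amountCents, Long::sum);
    }

    // Simple transaction server program #2.
    public void payLoan(long loanAccount, long amountCents) {
        loanBalances.merge(loanAccount, -amountCents, Long::sum);
    }

    // Compound application, composed with local procedure calls. Run inside one
    // enclosing transaction, either both steps take effect or neither does.
    public void payLoanFromChecking(long checkingAccount, long loanAccount,
                                    long amountCents) {
        debitChecking(checkingAccount, amountCents);
        payLoan(loanAccount, amountCents);
    }
}
```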
This multitier TP application architecture means that the TP application itself must be split into different parts that perform these different functions: front end, request controller, and transaction server. Most of this chapter is devoted to the details of what each of these components needs to do.
Multitier Architectures
TP systems usually have two kinds of hardware subsystems: front-end systems that sit close to the display devices, and back-end systems that sit close to the databases. In a simple configuration, each front-end system may be a PC running a web browser connected to the Internet, and the back-end system may be a single machine such as a shared-memory multiprocessor running a web server and a database management system. In complex configurations, both the front-end and back-end systems may contain many machines. For example, a front-end system may have multiple machines that support a large number of devices in a retail store. A back-end system may be a large server farm that supports hundreds of stores, with different machines running different applications, such as finance, order processing, shipping, and human resources.
A major architectural issue in TP systems is how to map the functional components of Figure 3.1 into processes on front-end and back-end systems. One natural way is to have each part run in a separate kind of process:
- The front-end program runs in a separate process, typically either a web browser or custom software to control relatively low-function end-user devices. On large systems, separate front-end machines are dedicated to front-end programs. On small systems, they run on the same back-end machine as other components.
- Each request controller runs in a separate process and communicates with the front-end programs via messages. It usually runs on a back-end system.
- Each transaction server runs as a process on a back-end system, preferably colocated on the same machine or local area network as the database system that it most frequently accesses. It communicates with request controllers and other transaction servers via messages.
- Each database system runs as a process on a back-end system.
Most modern TP applications are structured in this multitier architecture to get the following benefits in a distributed computing environment:
- Flexible distribution: Functions can be moved around in the distributed system without modifying application programs, because the different functions already are separated into independent processes that communicate by exchanging messages.
- Flexible configuration: Processes can be located to optimize performance, availability, manageability, and so on.
- Easier scale-out: The distribution and configuration flexibility makes it easier to scale out a system by adding more server boxes and moving processes to them.
- Flexible control: Each functional component can be independently controlled. For example, one can control the relative speeds of transaction servers by varying the number of threads in those servers without affecting the front-end program or request controller functions, which are running in separate processes.
- Easier operations: In a big system, only a few people are proficient at each tier's applications. Having the tiers isolated makes them easier to debug and independently upgradable.
- Fault isolation: Since the different functions are running in different processes, errors in one function cannot corrupt the memory of other functions, which are running in separate processes.
The main disadvantage of this multitier architecture is its impact on performance. The functional components communicate via messages between processes, instead of local procedure calls within a single process. The former are at least two orders of magnitude slower than the latter. Since even the simplest transaction requires a round-trip between a front-end program and request controller and between a request controller and transaction server, there is quite a lot of message overhead in this approach.
There are other disadvantages of the multitier architecture due to its large number of moving parts. This leads to complexity in the design, deployment, configuration, and management of the multitier system. To mitigate these problems, vendors have been steadily improving their tools for development and system management. But there is still much room for improvement.
Due to communications overhead, it is common to combine functions in a single process. For example, most database systems support stored procedures, which are application programs that execute within the database server process. One can use this mechanism to run transaction server programs as stored procedures, thereby eliminating a layer of processes between request controllers and the database system and hence eliminating communication overhead. Of course, this reduces the degrees of flexibility of the multitier architecture, since it prevents transaction server programs from being distributed independently of the database server processes in which they run.
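For instance, if PayLoanFromChecking were installed as a stored procedure (the procedure name and parameters below are hypothetical), a request controller could invoke it in a single round trip using JDBC's CallableStatement; the application logic then executes inside the database server process.

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;

public class StoredProcClient {
    // Invokes a transaction server program installed as a stored procedure.
    public static void payLoanFromChecking(Connection conn, long checkingAcct,
                                           long loanAcct, long amountCents)
            throws SQLException {
        try (CallableStatement call =
                 conn.prepareCall("{call pay_loan_from_checking(?, ?, ?)}")) {
            call.setLong(1, checkingAcct);
            call.setLong(2, loanAcct);
            call.setLong(3, amountCents);
            call.execute(); // one round trip replaces several per-statement trips
        }
    }
}
```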
Taking this approach to the extreme, one can run all the functional components of the multitier architecture in a database server process. This reduces the multitier architecture to a two-tier architecture. This was a popular approach in the early days of client–server computing in the 1980s, but it fell out of favor for large-scale systems due to its limited ability to scale out. However, as database servers become more functional, it is looking more appealing. We will discuss this trend later, in Section 3.7.
Service-Oriented Architecture
In addition to the multitier application architecture, application design methodologies play a role in the structure of TP applications. Service-oriented architecture (SOA) is one such design methodology, which was discussed in Chapter 1. In SOA, the designer identifies a service that a business provides for its customers and partners. The designer maps this business service to a software service, which is an operation. Typically, a set of related operations are grouped together in a service interface. Each operation in a service interface is implemented as a software component that can be invoked over a network by sending it a message. In SOA, operations are intended to be relatively independent of each other, so they can be assembled into applications in different combinations, connected by different message patterns.
In a TP system, a service can implement a transaction or a step within a transaction. That is, it can play the role of a request controller or transaction server. In either case, it is invoked by sending a message to the service. In this sense, the notion of a service is nicely aligned with multitier TP system architecture.
This alignment between SOA and TP depends only on the fact that SOA decomposes applications into independent services. It does not depend on the particular technology that is used to define service interfaces or to communicate between services, such as RPC or Web Service standards.
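In code terms, a service interface is just a named group of related operations. The Java sketch below (the operations are hypothetical) could equally well be bound to RPC, Web Services, or a message queue; the SOA decomposition itself does not care.

```java
// A hypothetical service interface: related operations grouped together, each
// implementable as a component that is invoked over a network by a message.
public interface LoanService {
    long openLoanAccount(long customerId, long principalCents);
    void creditPayment(long loanAccountId, long amountCents);
    long amountOwedCents(long loanAccountId);
}
```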
Object-Oriented Design
Another popular application design methodology that plays a role in the structure of TP applications is object-oriented design. Object-oriented design offers a different perspective than SOA, focusing on modeling things rather than functions.
Object-oriented design maps nicely onto the TP application architecture of Figure 3.1, as shown in Figure 3.2. In this style of design, one starts by defining business objects, which are the elementary types of entities used by the business. In programming terms, each business object corresponds to a class in an object-oriented programming language, such as C++, Java, C#, or Visual Basic. It encapsulates the elementary operations on that type of entity, called methods. Typically, these methods change slowly, because they correspond to types of real-world objects whose behavior has been well-established for a long time. For example, the following could be defined as business objects:
- Customer: It supports methods to create a new customer, change address, modify phone number, and return customer information in several different formats.
- Loan Account: It supports methods to create a new loan account, increase the amount owed, credit the amount paid, and associate the loan account with a different customer.
- Credit History: It supports methods to create a credit history for a given customer, add a credit event (such as a loan or loan payment), and return all its credit events for a given time period.
After defining the business objects in an application, one defines business rules, which are actions that the business performs in response to things that happen in the real world. For example, the business rule for opening a new loan might involve creating a new customer object (if this is a new customer), checking the customer's credit history, and if the credit history is satisfactory, then creating an account. Business rules change more frequently than business objects, because they reflect changes in the way the business operates in the real world. It is therefore useful to program business rules in modules that are separate from business objects.
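A minimal Java sketch of this separation (all type names are hypothetical): the open-loan business rule lives in its own module and orchestrates the Customer, Credit History, and Loan Account business objects behind small interfaces.

```java
// Business objects, reduced to the operations the rule needs.
interface CustomerDirectory { long findOrCreate(String name); }
interface CreditBureau { boolean isSatisfactory(long customerId); }
interface LoanAccounts { long create(long customerId, long principalCents); }

// The business rule: kept separate so it can change without touching the objects.
public class OpenLoanRule {
    private final CustomerDirectory customers;
    private final CreditBureau creditHistories;
    private final LoanAccounts loanAccounts;

    public OpenLoanRule(CustomerDirectory customers, CreditBureau creditHistories,
                        LoanAccounts loanAccounts) {
        this.customers = customers;
        this.creditHistories = creditHistories;
        this.loanAccounts = loanAccounts;
    }

    // Create the customer if new, check credit, and only then open the account.
    public Long openLoan(String customerName, long principalCents) {
        long customerId = customers.findOrCreate(customerName);
        if (!creditHistories.isSatisfactory(customerId)) {
            return null; // credit history not satisfactory: no account is created
        }
        return loanAccounts.create(customerId, principalCents);
    }
}
```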
One can map this object-oriented application design onto TP application architecture by running business objects as transaction servers and business rules as request controller programs. This is an efficient architecture, since business objects make frequent access to the database that stores the object's state and can be colocated with the database. It is also a flexible structure, since business rules can be changed within request controllers without affecting the business objects (i.e., transaction servers) that they call.
Applications created using objects can be service-enabled to participate in an SOA. Externally callable methods of an object-oriented application are good candidates for services. Services might expose only portions of the functionality of the objects through the service interface.
Simple Requests
In this chapter, we'll focus on simple requests. A simple request accepts one input message from its input device (a display device or specialized device such as an ATM), executes the transaction, and sends one message back to the input device. Examples are making a bank account deposit, placing an order, or logging a shipment. Each simple request is independent of every other simple request.
A given user interaction may actually require a sequence of related requests. For example, a user might want to arrange a trip, which requires reserving airline seats, reserving a car, and reserving a hotel room. A travel web site may offer this as one request, even though it may actually run as three separate requests. We'll look at multi-request interactions in Chapter 5. In this chapter, we'll assume that all requests are simple—one message in and one message out.
The next three sections, Sections 3.3 through 3.5, cover the main components of TP application architecture: front-end programs, request controllers, and transaction servers. They look at both the application's functions and issues related to building the underlying components. Section 3.6 looks at transactional middleware that provides support for these components. Section 3.7 revisits the two-tier versus three-tier system models, exploring in more detail the decision to group front-end programs, request controllers, and transaction servers into the database server process.
URL: https://www.sciencedirect.com/science/article/pii/B9781558606234000032
Transaction Processing Systems
Sasan Rahmatian, in Encyclopedia of Information Systems, 2003
C External Physical View of a TPS
Recall that the physical view of a TPS focuses on the technology used in implementing it. Thus the external physical view is concerned with all technologies used in providing input and producing output. Rather than list the various input/output technologies, we will discuss the broader categories into which TPS input/output technologies commonly fall:
- None. This is the oldest TPS interface. Direct face-to-face communication between the system and its customers, without any technology used as the medium, still works in some industries, such as retail.
- Mail. This is the second oldest model, where the traditional postal system is used for sending orders and payments, and receiving invoices.
- Fax. While very popular during the past 20 years, fax is a viable TPS technology only in certain ways, such as sending in orders.
- Phone. This is also a traditional TPS technology that works more for the front end of the TPS cycle, namely, the customer's preliminary request for functionality, price, availability, and transaction alternatives.
- Electronic. This is the most advanced form of communication. As such, it takes two different forms: free-format and structured.
- Free-format. E-mail communication falls in this category. Because e-mail contents are not governed by any standardized formatting rules, e-mail is not the optimal way of conducting customer-organization interactions.
- Structured. Electronic data interchange (EDI) and Internet-based electronic commerce (whether business to business or business to consumer) fall in this category. In either case, predesigned procedures are used to regulate the electronic flow of information between the customer and the organization. When the customer-organization communication is completely structured, it can be automated. Automated customer service representative systems and automated teller machines are examples of this principle.
URL: https://www.sciencedirect.com/science/article/pii/B0122272404001866
Transaction Processing Abstractions
Philip A. Bernstein, Eric Newcomer, in Principles of Transaction Processing (Second Edition), 2009
Scalability Techniques
Several abstractions are needed to help a TP system scale up and scale out to handle large loads efficiently, including caching, resource pooling, and data partitioning and replication. Using these abstractions improves the ability of a TP system to share access to data.
Caching is a technique that stores a copy of persistent data in shared memory for faster access. The major benefit of caching is faster access to data. If the true value of the data in its permanent home is updated, then synchronization is required to keep the cached values reasonably up to date.
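A minimal sketch of such a cache in Java (the loader function is a stand-in for fetching from the data's permanent home): reads are served from shared memory, and an invalidation hook keeps cached values roughly synchronized with updates.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class SharedCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> loader; // fetches from the permanent home

    public SharedCache(Function<K, V> loader) {
        this.loader = loader;
    }

    // Serve from shared memory; load from the permanent home only on a miss.
    public V get(K key) {
        return cache.computeIfAbsent(key, loader);
    }

    // Called when the permanent copy changes, to keep the cache roughly current.
    public void invalidate(K key) {
        cache.remove(key);
    }
}
```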
Resource pooling is a mechanism that reuses a resource for many client programs, rather than creating a new resource for each program that needs one. For example, database connections can be pooled. A database connection is allocated to a program when it needs to use the database and returned to the pool when the program's job is completed.
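Here is a minimal sketch of the pooling pattern in Java, generic over any resource type (a real connection pool would add validation, timeouts, and so on): acquire hands out an existing resource instead of creating a new one, and release returns it for reuse.

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ResourcePool<R> {
    private final BlockingQueue<R> idle;

    // The pool is created once with a fixed set of resources
    // (e.g., already-open database connections).
    public ResourcePool(List<R> resources) {
        this.idle = new ArrayBlockingQueue<>(resources.size(), true, resources);
    }

    // Blocks until some resource is free, rather than creating a new one.
    public R acquire() throws InterruptedException {
        return idle.take();
    }

    // Returns the resource to the pool for reuse by the next program.
    public void release(R resource) {
        idle.offer(resource);
    }
}
```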
Partitioning is a technique for improving scalability by segmenting resources into related groups that can be assigned to different processors. When a resource type is partitioned, the TP system routes requests for a resource to the partition that contains it. For example, if a database is partitioned, an access to a data item is routed to the database partition that contains the data item.
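Routing can be as simple as a hash on the data item's key, as in this Java sketch (the fixed partition count is an assumption; real systems also use range or directory-based schemes):

```java
public class PartitionRouter {
    private final int partitionCount;

    public PartitionRouter(int partitionCount) {
        this.partitionCount = partitionCount;
    }

    // The same key always routes to the same partition, so a request for a
    // data item is sent to the one partition that contains it.
    public int partitionFor(String key) {
        return Math.floorMod(key.hashCode(), partitionCount);
    }
}
```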
Replication is a technique for improving scalability by spreading the workload across multiple identical servers. Clients can either push their work onto particular servers or enqueue the work and have the servers pull from the queues. A client may have affinity for a server that has cached data that it frequently accesses, in which case it prefers sending its workload to that server. Replication can also be used to improve availability, by using backup replicas to handle the load of a failed replica. One major challenge of replication is to keep updatable replicas mutually consistent at an affordable cost.
URL: https://www.sciencedirect.com/science/article/pii/B9781558606234000020
Designing a Warehouse
Lilian Hobbs, ... Pete Smith, in Oracle 10g Data Warehousing, 2005
2.1.1 Don't Use Entity Relationship (E-R) Modeling
The typical approach used to construct a transaction-processing system is to construct an entity-relationship (E-R) diagram of the business. It is then ultimately used as the basis for creating the physical database design, because many of the entities in our model become tables in the database. If you have never designed a data warehouse before but are experienced in designing transaction-processing systems, then you will probably think that a data warehouse is no different from any other database and that you can use the same approach.
Unfortunately, that is not the case, and warehouse designers will quickly discover that the entity-relationship model is not really suitable for designing a data warehouse. Leading authorities on the subject, such as Ralph Kimball, advocate using the dimensional model, and we have found this approach to be ideal for a data warehouse.
An entity-relationship diagram can show us, in considerable detail, the interaction between the numerous entities in our system, removing redundancy in the system whenever possible. The result is a very flat view of the enterprise, where hundreds of entities are described along with their relationships to other entities. While this approach is fine in the transaction-processing world, where we require this level of detail, it is far too complex for the data warehouse. If you ask a database administrator (DBA) whether he or she has an entity-relationship diagram, the DBA will probably respond that he or she did once, when the system was first designed. But due to its size and the numerous changes that have occurred in the system during its lifetime, the entity-relationship diagram hasn't been updated, and it is now only partially accurate.
If we use a different approach for the data warehouse, one that results in a much simpler picture, then it should be very easy to keep it up-to-date and also to give it to end users, to help them understand the data warehouse. Another factor to consider is that entity-relationship diagrams tend to result in a normalized database design, whereas in a data warehouse, a denormalized design is often used.
URL: https://www.sciencedirect.com/science/article/pii/B9781555583224500047
The Business Demand for Data, Information, and Analytics
Rick Sherman, in Business Intelligence Guidebook, 2015
The Roles of BI and Operational Systems
To understand the role of a BI system versus a transaction processing system, start with data—there is a big difference between merely capturing data and using it for analysis. Capturing data means converting or translating it to a digital form. For instance, when you scan a printed bar code at the grocery store checkout, it captures data on the item's price. When you use your smartphone to scan the QR code on a movie poster, it captures that data and sends you to a web video with a preview of the movie. When you use your phone to scan a check for online deposit, then key in the deposit amount, that information is captured and sent to the bank.
Captured data is input into operational systems. These are the systems that perform the day-to-day transactions of a business, such as deposits in a bank, sales in a store, and course registrations in a university. These are also called transaction processing systems, because that is where the enterprise processes its transactions.
Contrast this with business intelligence, which is the applications used for reporting, querying, and analytics. This category also includes data warehousing, which is the database backbone to support BI applications. A data warehouse is not the only data source used by BI, but it remains a key ingredient in an enterprise-wide solution providing clean, consistent, conformed, comprehensive, and current information, rather than yet another data silo.
Traditionally, operational systems had only limited reporting capabilities. This is understandable; they are built for transactional processing, after all. An enterprise's data can be scattered across many different operational systems, making it very hard to gather and consolidate. In a big medical center, for example, one system could process data related to patient accounts, another could be dedicated to medical research data, and another used for human resources. The systems are built to process large amounts of data, and do it quickly.
The answer to the need for better reporting was BI—and it is still the answer. But there is also a middle ground, called operational BI, which causes a lot of confusion.
URL: https://www.sciencedirect.com/science/article/pii/B9780124114616000010
Master Data Synchronization
David Loshin, in Master Data Management, 2009
11.4.1 Application Infrastructure Synchronization Requirements
Clearly, many MDM environments are not specifically designated as tightly coupled transaction processing systems, which means that the requirements are going to be determined in the context of the degree to which there is simultaneous master data access, as well as how that is impacted by our dimensions of synchronization. Therefore, the process for determining the requirements for synchronization begins with asking a number of questions:
1. Is the master data asset supporting more of a transactional or an analytical environment? A transactional system will require a greater degree of synchronization than one that is used largely for analytics.
2. How many applications will have simultaneous access to master data? An environment with many applications touching the master data repository will probably have more data dependencies requiring consistency.
3. What is the frequency of modifications to master data? Higher frequency may require greater assurance of coherence.
4. What is the frequency of data imports and loads? This introduces requirements for data currency and timeliness of delivery.
5. How tightly coupled are the business processes supported by different master data sets? Siloed master object repositories may be of use for specific business objectives, but fully integrated business processes that operate on collections of master objects will demand greater degrees of synchrony.
6. How quickly does newly introduced data need to be integrated into the master environment? Rapid integration will suggest a high degree of timeliness and currency.
7. What is the volume of new master records each day?
8. How many applications are introducing new master records? (This question and the previous one focus on the potential for large numbers of simultaneous transactions and how that would impact the need for coordination.)
9. What are the geographical distribution points for master data? In a replicated environment, this would impose coherence and consistency constraints that may be bound by network data transfer bandwidth.
This is just a beginning; the answers to these questions will trigger additional exploration into the nature of coordinating access to master data. The degree of synchronization relates to the ways that applications are using master data. When there is a high overlap in use of the same assets, there will be a greater need for synchrony, and a low degree of overlap reduces the need for synchrony.
It is valuable to note that one of the biggest issues of synchrony in an MDM environment leveraging a service-oriented architecture (SOA) is the degree to which end point systems can consume updates. In this asynchronous paradigm, if some end point systems cannot consume updates, for instance, or there's a huge backlog of updates, you may have both performance and consistency problems.
URL: https://www.sciencedirect.com/science/article/pii/B9780123742254000114
Source: https://www.sciencedirect.com/topics/computer-science/transaction-processing-system