
Wednesday, April 20, 2011

SAP Standard Background Jobs and Job Monitoring

There is a range of background jobs that must run regularly in a production system, for example to delete obsolete jobs or spool objects. Schedule the following jobs in the job definition transaction SM36 so that they are started automatically at the specified frequency:
 

Program Name / Job Name | Repetition Frequency | Description
RSBTCDEL / SAP_REORG_JOBS | daily | Deletes old background jobs
RSBDCREO / SAP_REORG_BATCHINPUT | daily | Deletes old batch input folders
RSSNAPDL / SAP_REORG_ABAPDUMPS | daily | Deletes old ABAP dumps
RSBPSTDE / SAP_REORG_JOBSTATISTIC | monthly | Deletes old job statistics
RSBPCOLL / SAP_COLLECTOR_FOR_JOBSTATISTIC | daily | Creates job statistics
RSCOLL00 / SAP_COLLECTOR_FOR_PERFMONITOR | hourly | Starts data collectors for ABAP statistics records
RSN3_STAT_COLLECTOR / SAP_COLLECTOR_FOR_NONE_R3_STAT | hourly | Starts data collectors for non-ABAP statistics records (Distributed Statistics Records, DSRs)
RSXMILOGREORG / SAP_REORG_XMILOG | weekly | Deletes obsolete entries in the XMI log
RSAL_BATCH_TOOL_DISPATCHING / SAP_CCMS_MONI_BATCH_DP | hourly | Starts long-running data collectors that report application-specific values to the monitoring architecture
RSPO0041 or RSPO1041 / SAP_REORG_SPOOL | daily | Deletes obsolete spool requests to reduce system load
RSPO1043 | daily | Checks the consistency of the spooler and of the TemSe and evaluates the results if necessary
RSBTCPRIDEL / SAP_REORG_PRIPARAMS | monthly | Reorganizes the print parameters across clients
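
The same scheduling can also be done programmatically with the standard background job API. The following is a minimal ABAP sketch for SAP_REORG_JOBS only; the variant name ZDEL_DEFAULT for RSBTCDEL is an assumption and would have to be created first.

REPORT zschedule_reorg_jobs.

* Minimal sketch: create, define, and release the daily job SAP_REORG_JOBS.
DATA: lv_jobname  TYPE tbtcjob-jobname VALUE 'SAP_REORG_JOBS',
      lv_jobcount TYPE tbtcjob-jobcount.

* Open a new background job.
CALL FUNCTION 'JOB_OPEN'
  EXPORTING
    jobname  = lv_jobname
  IMPORTING
    jobcount = lv_jobcount
  EXCEPTIONS
    OTHERS   = 1.
IF sy-subrc <> 0.
  WRITE / 'JOB_OPEN failed.'.
  RETURN.
ENDIF.

* Add the deletion report as the only job step.
* ZDEL_DEFAULT is an assumed variant for RSBTCDEL - create it beforehand.
SUBMIT rsbtcdel USING SELECTION-SET 'ZDEL_DEFAULT'
       VIA JOB lv_jobname NUMBER lv_jobcount
       AND RETURN.

* Release the job: start today at 23:00 and repeat daily.
CALL FUNCTION 'JOB_CLOSE'
  EXPORTING
    jobcount  = lv_jobcount
    jobname   = lv_jobname
    sdlstrtdt = sy-datum
    sdlstrttm = '230000'
    prddays   = 1
  EXCEPTIONS
    OTHERS    = 1.
IF sy-subrc <> 0.
  WRITE / 'JOB_CLOSE failed.'.
ENDIF.

Scheduling the jobs once in SM36 achieves the same result; a sketch like this is only worthwhile if you want to script the setup.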

 
 

You can monitor jobs in the following ways:

  • You can call the Job Selection transaction (transaction SM37) and check whether the jobs actually ran without errors.

  • You can use the job monitoring of the Alert Monitor (transaction RZ20) to monitor the jobs. You can set up job monitoring so that you are automatically notified if an error occurs.

With job monitoring, alerts are displayed if errors occur. If you assign an "auto-reaction method" to these alerts, you are notified, for example, by SMS or e-mail if problems occur during the execution of the jobs.
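
In addition to checking SM37 manually, such a check can also be scripted. The following minimal ABAP sketch lists jobs that were cancelled since yesterday, based on the standard job status table TBTCO (status 'A' = cancelled); the one-day selection window is an assumption.

REPORT zcheck_cancelled_jobs.

* Job status 'A' (cancelled/aborted) and the fields used below come from
* the standard job status table TBTCO.
DATA: lt_jobs TYPE STANDARD TABLE OF tbtco,
      ls_job  TYPE tbtco,
      lv_from TYPE d.

lv_from = sy-datum - 1.                    "look one day back

SELECT * FROM tbtco INTO TABLE lt_jobs
  WHERE status    = 'A'
    AND sdlstrtdt >= lv_from.

IF lt_jobs IS INITIAL.
  WRITE / 'No cancelled jobs found.'.
ENDIF.

LOOP AT lt_jobs INTO ls_job.
  WRITE: / ls_job-jobname, ls_job-jobcount, 'was cancelled'.
ENDLOOP.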

 

SAP NetWeaver Monitoring Availability

It is not possible to uniquely define availability with regard to IT components. Availability can mean the existence of a process at operating system level. Other definitions of availability can include, for example, the provision of a service within a certain time, or the average time for performing a user action.

SAP provides different mechanisms for checking availability. The following is common to all of them:

  • The availability information is usually of a technical nature.

  • Once set up, the availability check is performed periodically and without user interaction.

  • The result of the check is reported in the central monitoring system.

You have the following options for monitoring availability:

  • You can use Availability Monitoring with CCMSPING to monitor ABAP and Java systems and their instances. With this option, the CCMSPING agent queries the relevant message server about which instances are reported as active. With ABAP systems, you can also have the availability of instances and logon groups monitored using a direct RFC call to the instance itself (see the sketch after this list).

  • You can check availability at application level with the Generic Request and Message Generator GRMG. With this option, the central system periodically calls a GRMG application using a URL. The GRMG application performs component-specific checks and returns the result of the checks to the central system.

  • You can monitor the existence of the relevant process at operating system level. A check of this type determines a prerequisite for the availability of the component. The check is performed by the operating system collector SAPOSCOL and the CCMS agent SAPCCMSR.
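
As a very simplified illustration of an availability probe at RFC level, the following ABAP sketch only calls RFC_PING against an RFC destination maintained in SM59; the destination name is an assumption, and the sketch is not a replacement for the CCMSPING agent, which queries the message server.

REPORT zrfc_availability_probe.

* Simplified availability probe: the real CCMSPING agent asks the message
* server which instances are active; here we only check that one RFC
* destination (maintained in SM59, name assumed) answers an RFC_PING.
CONSTANTS lc_dest TYPE rfcdest VALUE 'PRD_CLNT000'.

CALL FUNCTION 'RFC_PING'
  DESTINATION lc_dest
  EXCEPTIONS
    system_failure        = 1
    communication_failure = 2
    OTHERS                = 3.

IF sy-subrc = 0.
  WRITE: / 'Destination', lc_dest, 'is reachable.'.
ELSE.
  WRITE: / 'Destination', lc_dest, 'is NOT reachable.'.
ENDIF.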

Execution
Check the Availability monitor for alerts. To open the monitor, proceed as described in Tasks. Depending on the affected nodes, you can react to the alert as follows:
 
Affected MTE / Meaning

Check by CCMSPING
With this monitor, you can perform availability monitoring for selected ABAP and Java systems and their instances.

  • <SysID>, Availability: <SysID> on <Host>, Instances: <SysID>
    Check that the system or instance really is unavailable by attempting to log on. If it is, check the developer trace (dev_trace), if appropriate, and restart the system or instance (depending on the platform you are using, either with the MMC or using a start script).

J2EE Engines: Heartbeat by GRMG
The subtree for each GRMG scenario consists of two subtree types:
  • Availability of the scenario (was it possible to perform the check of the components?) with the prefix Selfmonitoring
  • Availability of the monitored components

  • GRMG: J2EE <SysID> on <Host>, Selfmonitoring: Scenario ...
    If an alert occurred in this subtree, it was not possible to monitor the J2EE Engine; availability values are therefore not delivered for the subtree below it.

  • GRMG: J2EE <SysID> on <Host>, Test: <Component>
    If an alert occurred in this subtree, the monitored component is not available.

J2EE Engines: OS Processes
You can use this subtree to determine (sorted by the monitored Java systems) whether the most important processes (dispatchers, servers, and SDM) are running at operating system level. If required processes are not running, first check the log files using the Standalone Log Viewer, and then restart the Engine.

 

 

SAP NetWeaver Monitoring Components

 

Component / Description

Alert Monitor

Use the alert monitor (transaction RZ20) as the central tool for monitoring your entire system landscape. If malfunctions or problems occur, alerts are generated. These alerts are displayed in various monitors in a tree structure, and you can assign auto-reactions to them. In this way, you are informed quickly and reliably about the alert, even if you are not working in the alert monitor at the time.

If you use the alert monitor as the entry point for your central monitoring, you will use the tools listed below in this table as analysis methods, that is, after an alert is generated, you can start the appropriate tool for the alert by double-clicking the alert.

SAP NetWeaver Administrator

The SAP NetWeaver Administrator (NWA) unifies the most important administration and monitoring tools both for Java and for ABAP systems in a new, browser-based user interface. The most important advantages of the NWA are:

  • You no longer need to switch between different tools for administration, troubleshooting, and problem analysis of your entire SAP NetWeaver system landscape.

  • There is a central administration tool available to you landscape-wide for both Java and ABAP systems for starting and stopping instances, checking configuration settings and logs, and monitoring error-free functioning of components.

  • The interface follows the current guidelines for interface design and is easy to use, task-oriented, and complete. Because it uses Web Dynpro, it runs in a browser.

SAP Solution Manager

Solution Monitoring in the SAP Solution Manager monitors heterogeneous system landscapes. You can monitor your systems and business processes in one tool.

The system monitoring and business process monitoring are based on the data of the CCMS monitoring infrastructure.

Operating System Monitor

You can use the operating system monitor to monitor the following operating system resources:

  • Virtual and physical memory

  • CPU

  • File system administration and physical disks

  • Network

    Transaction ST06 displays the operating system data for the local server. Transaction OS07 displays the data for the entire system.

Workload Monitor

The Workload Monitor (transaction ST03) displays statistical data for the local ABAP system for performance analysis. You can also display, for example, the total values for all instances, and compare the performance of particular instances over a period of time. The large number of analysis views and collected data allow you to quickly determine the cause of possible performance problems.

Global Workload Monitor

The Global Workload Monitor (transaction ST03G) displays statistical records for entire landscapes, both for ABAP systems and for DSR components, such as J2EE Engine, BC, and ITS. You can, for example, display the load data created when external components are called.

ICM Monitor

You can use the ICM monitor (transaction SMICM) to monitor and administer the Internet Communication Manager, which receives and sends the requests from or to the Internet (in the server role, for example, the inbound HTTP requests).

The SAP system also contains various tools for displaying detailed information on application servers, user sessions, and work processes:

  • Overview of SAP Application Servers

  • Monitoring and Administration of the SAP Message Server

  • Displaying and Controlling Work Processes

  • Displaying and Managing User Sessions

  • Trace Functions

  • Monitoring RFC Resources on the Application Server

  • Using the SAP Gateway Monitor in the SAP System

  • System Log

If you want to work with these tools, from the SAP initial menu choose Administration --> System Administration, or call transaction S002. The initial screen for system administration appears. The tools are available under Administration and Monitor.

There are also programs that you can use at operating system level to monitor the message server or the gateway.

For information about other tools for monitoring system components, see these sections.

SAP NetWeaver - Overview of Monitoring Architecture


You can display monitoring data for the following components in the central monitoring system (CEN):

  • Systems based on SAP Web AS ABAP and Java

  • SAP systems with earlier releases (as of SAP R/3 3.0)

  • Non-SAP components

The data is transferred to CEN using CCMS agents or ABAP RFC connections. You can display it there directly in the Alert Monitor, or forward the data to external tools or SAP Business Intelligence for additional evaluation.

The elements of the monitoring architecture function largely independently of each other and can, in particular, be further developed and adjusted independently of one another.

The alert monitor also provides the administration methods that you need to monitor the system. These enable you to set threshold values for alerts and add or adapt auto-reaction and analysis methods. Auto-reaction methods react automatically when an alert is triggered; analysis methods enable you to examine the cause of an alert without leaving the alert monitor. The monitoring architecture also contains tools for administering and archiving the alerts.

Program/Application / Description

CCMS agents

CCMS agents are independent processes that connect a monitored component (such as a host, an ABAP instance, or a Java instance) with CEN using RFC.

Operating System Collector SAPOSCOL

SAPOSCOL is a stand-alone program that runs in the operating system background. It runs independently of SAP instances exactly once per monitored host and collects data about operating system resources.

Availability Monitoring with CCMSPING

With this type of monitoring at system or instance level, the CCMSPING agent queries the relevant message server about which instances are reported as active. You can also have the instance availability of ABAP systems monitored using a direct RFC call to the instance itself.

Monitoring with the Generic Request and Message Generator

With this type of monitoring at application level, CEN periodically calls a GRMG application using a URL. The GRMG application performs component-specific checks and returns the result of the checks to CEN.
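
As a simplified illustration of this application-level principle, the following ABAP sketch performs a plain HTTP GET against an assumed component URL and evaluates the status code; it deliberately leaves out the GRMG-specific XML request and response handling.

REPORT zurl_availability_probe.

* Illustrative only: a plain HTTP GET against an assumed component URL.
* The real GRMG framework exchanges a specific XML request/response,
* which is omitted here.
DATA: lo_client TYPE REF TO if_http_client,
      lv_code   TYPE i.

cl_http_client=>create_by_url(
  EXPORTING
    url    = 'http://host:50000/monitored-app/ping'   "assumed URL
  IMPORTING
    client = lo_client
  EXCEPTIONS
    OTHERS = 1 ).
IF sy-subrc <> 0.
  WRITE / 'Could not create the HTTP client.'.
  RETURN.
ENDIF.

lo_client->send( EXCEPTIONS OTHERS = 1 ).
lo_client->receive( EXCEPTIONS OTHERS = 1 ).
IF sy-subrc = 0.
  lo_client->response->get_status( IMPORTING code = lv_code ).
  WRITE: / 'Component answered with HTTP status', lv_code.
ELSE.
  WRITE / 'Component did not respond.'.
ENDIF.
lo_client->close( EXCEPTIONS OTHERS = 1 ).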


Wednesday, February 23, 2011

FAQs on CCMS for MaxDB/SAP liveCache technology

1. What does the abbreviation CCMS stand for?

CCMS stands for Computing Center Management System.

2. What transactions does the MaxDB/liveCache CCMS incorporate?

The MaxDB/liveCache CCMS incorporates the following transactions:

ST04: DB Performance Monitor

DB02: Tables and Indexes Monitor

DB12: DBA Backup Logs

DB13 and DB13C (up to 6.40): Central DBA Planning Calendar

DB59: MaxDB/liveCache System Overview

DB50: SAP DB Assistant

LC10: liveCache Assistant

RZ20: CCMS Monitoring

DBACOCKPIT: Start DBA Cockpit (as of SAP Release 7.0)

3. What Basis Support Packages are the minimum prerequisites for MaxDB CCMS?

To ensure that the MaxDB-CCMS will work, the minimum requirement depends on your SAP release and on the database version that you use. This information is available in Note 382312 "Retrieval of CCMS for R/3 on SAP DB >= 7.2 (7.2,..., 7.6)". Refer also to the related notes specified in Note 382312.

4. Which software prerequisites must be fulfilled on the application servers so that I can use CCMS?

To be able to use the MaxDB/liveCache CCMS, the following components must be installed on each application server:

- a MaxDB client installation (Note 649814)

- a current version of the transport program 'tp'

- the MaxDB-DBSL (Note 325402)

5. What user role must an SAP user have to be able to work with the MaxDB/liveCache CCMS?

To be able to use the MaxDB-CCMS, you require the user role SAP_BC_DB_ADMIN_SDB.

If you want to administer a liveCache using transaction LC10, refer to Note 452745 for information about which authorizations you require.

6. Why do I require the integration data in transaction DB59?

To successfully use the MaxDB/liveCache CCMS, you must maintain the integration data. For the procedure, see Note 588515.

7. Can I also include MaxDB databases that are not part of the SAP system landscape in the CCMS, and therefore operate the computing center centrally from a transaction in an SAP system?

Yes, by maintaining the integration data using transaction DB58 you can also include an external MaxDB/liveCache database for administration in an SAP system.

You can use transaction DB13C in Web AS 6.* to administer external databases from a central SAP system.

As of Web AS 7.00, the administration is carried out with transaction DBACOCKPIT or DB59 --> Tools --> DBA Planning Calendar.

8. What actions must I carry out to include an external database in the CCMS?

See Note 588515.

9. Can I use CCMS to administer a MaxDB/liveCache from an SAP system that runs using a database system other than MaxDB?

Yes, refer to Note 588515 for more information.

10. What prerequisites must be fulfilled to enable me to use RZ20 (CCMS Monitoring Architecture) for MaxDB or the liveCache?

See Note 545030.

11. How can I analyze connection problems?

- Ensure that the software listed in question 4 of this FAQ is installed on the application servers.

- Ensure that the minimum prerequisites of the Basis Support Packages (Note 382312) are fulfilled.

- Ensure that the integration data in transaction DB59 is correct.

- Execute a connection test in transaction DB59 and check the log.

If you cannot solve the problem yourself, open a customer message on the component BC-DB-SDB-CCM and attach the log to the message.

12. May a remote system that is included in the CCMS with DB59 have the same logical name as the local system?

No. The physical name of both databases (remote and local) can be identical. However, the logical name that is used to include the remote database in transaction DB59 must differ from the name of the local database.

If this restriction is ignored, various errors may occur within transactions DB59/DB50 and DBACockpit.

13. How can I determine whether the error has already been corrected in the CCMS?

If you want to check whether the CCMS problem that occurred in your system has been corrected, you should first search for SAP Notes. If the note search remains unsuccessful, you can search for the error in the WebPTS MaxDB error message tool (http://www.sapdb.org/webpts) using the advanced search under the component CCMS.

14. Where can I find more information about MaxDB CCMS?

Note 767598 provides information about where you can find the MaxDB/liveCache documentation online. Further information is available in the database administration documentation under CCMS: MaxDB.

15. Where can I find more information about transaction LC10?

Note 767598 provides information about where you can find the MaxDB/liveCache documentation online. Further information is available in the database administration documentation under CCMS: SAP liveCache technology.

FAQs on CCMS Monitoring Infrastructure

What is the CCMS Monitoring Infrastructure?

The CCMS Monitoring Infrastructure locally monitors a component using data collectors and stores the monitoring data in the main memory of the component. A central monitoring system can request this data over the network and therefore provide you with a central overview of your system landscape.

With the CCMS Monitoring Infrastructure, you can also set alerts to be triggered when certain threshold value conditions are met. You can set an auto-reaction to notify you as a result of the alert.

The CCMS monitoring infrastructure is used as a data platform by:

  • Transaction RZ22
  • The CCMS Alert Monitor
  • The Solution Manager
  • Various system management partner products

Which of my SAP systems should I use as the central CCMS monitoring system?

The central monitoring system should fulfill the following criteria:

  • It should have as high a release status for SAP Basis as possible. As the agent SAPCCM4X is only available as of a central system release status of SAP Basis 4.6C, monitoring larger system landscapes is only possible as of this release.
  • It should have as high an availability as possible. Central monitoring is of limited use if the system is not constantly available: you must then fall back on the local administration functions, and notification from the central system in the case of errors does not work either.

In large system landscapes, it can be useful to set up a separate, purely SAP Basis system for central monitoring. You could also use this system for other central tasks (such as the Solution Manager, Central User Administration, Transport Domain Controller, and so on). For the central monitoring system, a status of SAP Web Application Server 6.10 or 6.20 is ideal.

How much workload is caused on the central system by central monitoring?

The central monitoring system does not usually start any data collection of the component systems. The central monitoring system simply requests the monitoring data that the components have collected about themselves, using RFC. This workload is negligible.

How much workload is caused on the local component by central monitoring?

With the exception of the data transfer to the central monitoring system, no additional workload is caused on the local component, as the collection of monitoring data is activated by default.

How much workload is caused in general by monitoring?

The complete collection of performance values (for example, using the CCMS Monitoring Infrastructure, the operating system collector SAPOsCol or using the ST transactions) costs a maximum of 10% of the total performance.

Is there a limit on the number of components that I can display centrally in a monitor?

There is no restriction on the number of components that can be registered with the central monitoring system.

However, if you want to display a large number of components in a monitor, this monitor should have acceptable response time behavior for updating the data. The response time of the central monitor is influenced by the following factors:

  • How many components are displayed in the monitor?
  • How much data is requested from the respective components?
  • What is the current performance of the components?
  • What is the performance of the network connection between the components?

To improve the response time of the central monitor, you can take the following steps:

  • Connect components using CCMS agents. In this way, you can avoid possible performance bottlenecks at the level of the SAP work processes of the component.
  • Display as little data for the components as possible. Monitoring data are transferred using the RFC protocol. The more data you want to view, the longer the transfer process takes.
  • Ensure that the network connection between the components and the central system is as good as possible.
  • Use a central system with SAP Web Application Server 6.20. In this case, the CCMS agents automatically send their performance data to a data cache in the central system, and updating the monitor from this data cache takes only seconds.

Which components of the mySAP.com landscape can I monitor centrally?

You can centrally monitor an ever-increasing number of components of the mySAP.com landscape. To do this, the components must be instrumented in the CCMS Monitoring Infrastructure. SAP Note 420213, which is constantly updated, provides a current overview.

What is the CCMS Alert Monitor?

The CCMS Alert Monitor is transaction RZ20. This transaction allows you to create your own views of your system landscape, meaning that you can, for example, see the availability of all components at a glance.

How do I know what threshold values to set?

The CCMS Monitoring Infrastructure is delivered with predefined threshold values. You should check whether these threshold values are appropriate for your particular environment. Note that:

  • If the threshold values are too low, alerts are constantly triggered.
  • If the threshold values are too high, no alerts are triggered, even in critical situations.

You can either have your threshold values customized by an experienced consultant, or use the performance transactions provided by SAP to investigate the behavior of your SAP system in normal operation.

Example: You want to know what dialog response time would be measured for your system.

Use transaction ST03 or ST03N and analyze a typical working week. This performance transaction can provide you with a time profile of your response time behavior. You can then set a threshold value at an appropriate level above the average response time that you have calculated.

What is an auto-reaction?

If a threshold value condition is met, an alert is triggered. You can have an automatic action (such as a notification) carried out as a result of this alert. This action is known as an auto-reaction.

What is an analysis method?

SAP has defined a helpful analysis activity for every alert. This is the analysis method. You can start these directly from the CCMS Alert Monitor.

Example: The Internet Transaction Server has crashed. An alert is triggered in the CCMS Monitoring Infrastructure. The analysis method starts the browser directly with the appropriate URL for the ITS administration tool, with which you can restart the ITS.

Can I create my own methods?

You can configure and assign auto-reaction and analysis methods as you require.

What are the CCMS agents?

CCMS agents are additional programs that help you to centrally monitor your mySAP.com landscape more effectively.

There are three CCMS agents:

  • SAPCM3X: For connecting SAP systems with SAP Basis 3.X to a central monitoring system.
  • SAPCCM4X: To optimize the connection of SAP systems with SAP Basis 4.X and SAP Web Application Server 6.X to a central monitoring system.
  • SAPCCMSR: To connect SAP components with no SAP Basis (such as the Internet Transaction Server) to a central monitoring system.

Is there special training for the CCMS Alert Monitor?

Find out about the customer training course ADM106. This two-day course teaches you all you need to know about the CCMS Monitoring Infrastructure and the CCMS Alert Monitor. For more information about training courses, see this document.

How do the CCMS and the Solution Manager fit together? Are they not competing products?

No! Both use the data of the CCMS Monitoring Infrastructure; the data is simply displayed differently.

In the medium term, SAP will provide a central starting point for monitoring your system landscape.

Tuesday, August 3, 2010

SAP APO liveCache Monitoring

 

External Heartbeat

The collector uses an external program (DBMCLI) to periodically check if an SAP instance is able to access liveCache. A red alert is generated when a failure occurs.

Heartbeat

liveCache can have a number of statuses. liveCache is working properly when it has the status "WARM." By default, the status is checked every 5 minutes; however, this value is user-definable. A red alert is generated if any status other than "WARM" is detected.

System Error Messages

At user-definable intervals (the default is every 5 minutes), the collector reads the System Error Message log file. Every error message is reported as a red alert. A customization table is used to suppress notification of specific error messages and/or to modify how error messages are evaluated.

 

Synchronous BAPI Logging & Archive Logging

A customization table is used to set the logging level ("liveCache logging switched off," "synchronous logging turned on") and the archive logging (ON/OFF) for each client. A red alert is generated as soon as the actual value deviates from the default.

Connection to liveCache

The connection to liveCache is checked according to the specified interval. In addition to the connection tests, the system also checks if the appropriate entries for LDA (primary connection for multi-connect) and LCA (secondary connection) are present in the DBCON table.

OMS Data Cache, Converter Cache

The Object Management System (OMS) stores and manages the business objects. The data blocks are stored in cache for rapid access, and the references to the physical blocks are stored in the SYSTEMDEVSPACE. Storing the references in the so-called converter cache enables them to be accessed more quickly. After each run, the collector reports the actual OMS data cache usage values and hit ratio as well as the hit ratio for the converter cache.

These values can be used for creating a performance graphic.

 

Status Checks

The collector periodically checks the status of:

Database Full YES/NO

Diagnosis Monitoring ON/OFF

Monitoring ON/OFF

Vtrace ON/OFF

A red alert is generated as soon as the actual value deviates from the default.

 

Initialization Log File

The initialization log file is refreshed every time liveCache is started and initialized. The collector searches the last log file for errors. The strings that the collector searches for in the log file are defined in a customization table. The "*" (asterisk) wildcard can be used in the string definitions. The alert threshold can be determined at the string level.
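
The wildcard matching can be pictured with the ABAP comparison operator CP, which also treats '*' as a wildcard. The sketch below is purely illustrative; the sample log line and the pattern are assumptions, not the collector's actual implementation.

REPORT zlog_pattern_check.

* Purely illustrative: the sample log line and search pattern are
* assumptions, not the collector's actual implementation. The ABAP
* operator CP treats '*' (any string) and '+' (any character) as wildcards.
DATA: lt_logline TYPE STANDARD TABLE OF string,
      lt_pattern TYPE STANDARD TABLE OF string,
      lv_line    TYPE string,
      lv_pattern TYPE string.

APPEND 'ERR  -24988  SQL error during restart' TO lt_logline.  "sample line
APPEND '*-24988*'                              TO lt_pattern.  "sample pattern

LOOP AT lt_logline INTO lv_line.
  LOOP AT lt_pattern INTO lv_pattern.
    IF lv_line CP lv_pattern.
      WRITE: / 'Alert: pattern', lv_pattern, 'found in the log'.
    ENDIF.
  ENDLOOP.
ENDLOOP.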

 

Functional Check / Simulation

A report is used to check that liveCache is functioning. Test data is written to, read from, and deleted from the liveCache. A red alert is generated if an error occurs during the simulation. The last erroneous log file is always saved so that analyses can be performed at a later stage.

COM Routines

Application functions are implemented in liveCache as C++/COM routines. These functions enable the objects to be modified directly in memory.

Trace Level

The collector checks if at least one trace level is active. A red alert is generated when an active trace level is found.

Runtime (COM routines)

A COM routine's average runtime can provide an indication of the system's utilization. A customization table is used to define a COM routine's average response time. A red alert is generated when a defined response time is exceeded.

 

SAP APO CIF Monitoring

qRFC Ping

This connection test is twofold: first, the availability of the connection to the APO clients is tested in the APO system itself, and then the so-called external CIF connections (RFC lines) of the R/3 systems that are connected to APO are tested. The collector automatically recognizes all of the connections that need to be tested and tests them periodically at a user-definable interval (the default is 5 minutes). A red alert is generated if a connection failure is found.

qRFC Queue Length

Among other things, the number of entries per destination queue provides an indication of the APO system's processing speed. If there are problems processing a queue, the jobs may be assigned to another queue. Threshold values that determine the number of queue entries that are required to generate a yellow or red alert are specified in a customization table. The collector is scheduled to run at user-definable intervals (the default is 5 minutes).
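
As an illustration of such a length check, the following ABAP sketch counts the entries per outbound queue in the standard qRFC table TRFCQOUT; the queue name pattern 'CF*' and the threshold of 1000 entries are assumptions, since the real thresholds come from the customization table.

REPORT zcif_queue_length.

* TRFCQOUT is the standard outbound qRFC queue table; the queue name
* pattern 'CF%' (typical for CIF queues) and the threshold of 1000
* entries are assumptions - take the real thresholds from Customizing.
TYPES: BEGIN OF ty_qlen,
         qname TYPE trfcqout-qname,
         dest  TYPE trfcqout-dest,
         cnt   TYPE i,
       END OF ty_qlen.

DATA: lt_qlen TYPE STANDARD TABLE OF ty_qlen,
      ls_qlen TYPE ty_qlen.

SELECT qname dest COUNT( * )
  FROM trfcqout
  INTO TABLE lt_qlen
  WHERE qname LIKE 'CF%'
  GROUP BY qname dest.

LOOP AT lt_qlen INTO ls_qlen WHERE cnt > 1000.
  WRITE: / 'Queue', ls_qlen-qname, 'to destination', ls_qlen-dest,
           'contains', ls_qlen-cnt, 'entries'.
ENDLOOP.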

qRFC Queue State

A queue that is being processed can show a number of statuses. The collector checks if a queue has a failure status, in which case a red alert is generated.
 

qRFC Runtime

The runtime of a queue's job also provides an indication of the APO system's processing speed. If a queue's currently active job exceeds a predefined runtime (for example, because APO's upload program has fallen into an endless loop or the process has halted), a yellow or red alert is generated based on the settings. By default, a yellow alert is generated if the job's runtime exceeds 5 minutes and a red alert if it exceeds 15 minutes. Both of these threshold values are user-definable.

Logging/Debugging

The collector checks the current logging and debugging settings of the user that is registered in the RFC connection for all CIF connections. Logging and debugging default values can be individually defined for each user. A red alert is generated as soon as the actual value deviates from the default.

Consistency of the SAP Core Interface (CIF)

The collector checks the consistency of the SAP Core Interface. For the check, the data collector examines the APO-specific tables for inconsistencies. If inconsistencies are found, the data collector generates alerts providing information about how many inconsistencies were found and which logical systems (SAP R/3) are affected.

 

 

Monday, May 24, 2010

qRFC Monitor

 

Report: RSTRFCM1

In Release 3.0 you can execute function modules asynchronously in another R/3 System or in an external program. The function modules are not called immediately, but only when the next COMMIT WORK is executed. Until then, the calls are collected in an internal table. For each destination, they form one Logical Unit of Work (LUW).

With COMMIT WORK all calls are processed in an available work process in the sequence they were called in. If update records were also used before the COMMIT WORK, the asynchronous function modules are not executed until the update function modules can be processed without errors.

Transactional RFC (tRFC) guarantees that either all database operations are fully executed or, if one of the function modules responds with a termination, they are fully rejected (rollback). If an LUW has been executed without errors, it cannot be executed again. In some cases it may be necessary to roll back an LUW in an application program, for example because a table is locked. The function module RESTART_OF_BACKGROUNDTASK is used for this.

It executes a rollback, and the LUW is executed again at a later time. Each LUW is assigned a transaction ID. You only have to know the ID if, for example, you want to execute the function module in a BACKGROUND TASK.
The function module ID_OF_BACKGROUNDTASK returns the ID of the LUW. It must be called after the first CALL ... IN BACKGROUND TASK and before COMMIT WORK. With the function module STATUS_OF_BACKGROUNDTASK and the ID, you can check whether the LUW could be executed without errors at a later time. Usually the LUW is executed immediately after COMMIT WORK in the specified target system. If it is to be started at a specific time, you can set a time with the function module START_OF_BACKGROUNDTASK. This function module is also part of the LUW and is called after the first CALL ... IN BACKGROUND TASK and before COMMIT WORK.
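
A minimal ABAP sketch of this call pattern is shown below. The function module Z_UPDATE_STOCK, its parameter, and the destination TARGET_SYS are placeholders, and the exporting parameter name TASKID of ID_OF_BACKGROUNDTASK is stated here as an assumption.

REPORT ztrfc_call_sketch.

* Z_UPDATE_STOCK, its parameter IV_MATERIAL, and the RFC destination
* TARGET_SYS are placeholders; the exporting parameter name TASKID of
* ID_OF_BACKGROUNDTASK is an assumption.
DATA lv_tid TYPE arfctid.

* Record the call; nothing is sent yet - the call only becomes part of
* the current LUW.
CALL FUNCTION 'Z_UPDATE_STOCK' IN BACKGROUND TASK
  DESTINATION 'TARGET_SYS'
  EXPORTING
    iv_material = 'MAT-001'.

* Remember the transaction ID of this LUW for later status checks.
* Must be called after the first CALL ... IN BACKGROUND TASK and before
* COMMIT WORK.
CALL FUNCTION 'ID_OF_BACKGROUNDTASK'
  IMPORTING
    taskid = lv_tid.

* COMMIT WORK hands the collected calls over for execution as one LUW.
COMMIT WORK.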

Technical Implementation

All calls are contained in the tables ARFCSSTATE and ARFCSDATA. Each LUW is identified by a globally unique ID. When a COMMIT WORK is executed, all the calls with this ID are executed in the target system. The system function module ARFC_DEST_SHIP transports the data to the target system, and the function module ARFC_EXECUTE executes the function module calls. If an error or exception occurs in one of the calls, all of the database operations made in the previous calls are reset (ROLLBACK), and an error message is written to the table ARFCSSTATE. This error message can be evaluated in transaction SM58.
If an LUW could be executed successfully in the target system, the function module ARFC_DEST_CONFIRM is called, which confirms the successful execution in the target system. The affected entries in ARFCSSTATE and ARFCSDATA are then deleted.
If the target system could not be reached, for example because the connection is not active, the report RSARFCSE is scheduled in the background with the ID as the parameter and is called at regular intervals. The standard values can be displayed in 'Info -> System settings' in SM58. If you want to use your own settings for each destination, you can specify these in the TRFC options in transaction SM59. If no connection can be made in the time scheduled, the entry in ARFCSSTATE is deleted after a length of time that you specify. The deletion is done in the background by the report RSARFCDL.
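
For a quick overview of erroneous LUWs outside SM58, a minimal sketch could read the tRFC state table directly. The field names (ARFCDEST, ARFCFNAM, ARFCSTATE) and the status values SYSFAIL and CPICERR correspond to what SM58 displays, but verify them in your release.

REPORT ztrfc_error_overview.

* ARFCSSTATE is the standard tRFC state table; the field names and the
* status values SYSFAIL and CPICERR correspond to what SM58 displays -
* verify them in your release before relying on this.
DATA: lt_err TYPE STANDARD TABLE OF arfcsstate,
      ls_err TYPE arfcsstate.

SELECT * FROM arfcsstate INTO TABLE lt_err
  WHERE arfcstate = 'SYSFAIL'
     OR arfcstate = 'CPICERR'.

LOOP AT lt_err INTO ls_err.
  WRITE: / ls_err-arfcdest, ls_err-arfcfnam, ls_err-arfcstate.
ENDLOOP.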

Debugging

Call the transaction in the debugger, choose 'Goto -> Settings', and select the field 'In Background Task:...'. The LUW is then not sent immediately.

RFC API

It is also possible to execute function modules implemented in C asynchronously (connection type TCP/IP in SM59). These function modules are implemented in connection with the RFC API. The function modules ARFC_DEST_SHIP and ARFC_DEST_CONFIRM are contained there and call the relevant functions.

Restrictions:

  • The Windows API does not yet support asynchronous calls.
  • The once-only execution must be guaranteed by the implementation of the function module.