Reading the Logs & Reports Screen - Parallel Upgrade Tool (PUT)

Parallel Upgrade Tool (PUT) Reference

Product: Parallel Upgrade Tool (PUT)
Release Number: 3.09
Published: February 2020
Language: English (United States)
Last Update: 2020-02-24
dita:mapPath: ows1493317469465.ditamap
dita:ditavalPath: ows1493317469465.ditaval
dita:id: B035-5716
Product Category: Software, Teradata Tools and Utilities
To view the log file browser, click the Logs link in the lower-left corner of the main PUT screen.

Logs & Reports screen

These are the main sections in the Logs & Reports screen:

Log Section                     | Description
System Readiness Check Results  | Test results from SRC operations
Log files on this host          | TDput operations recently run on this system; the last entry in this section is the Port Manager Log
Service log files               | Job and File Transfer logs from local and selected remote nodes
Times Reports                   | Timing (duration) information about operations that were run

Log Files on this Host

The log files on this host are operation log files, or sequencer logs (note that the word “sequencer” or “old_sequencer” is prefixed to the actual log file name). When an operation is started, a sequencer process is started. This process logs everything to its sequencer log, and stdout and stderr are piped to the same log file to ensure nothing is missed. Click the Install/Upgrade Software link to open the filtered view (containing ERROR and WARNING messages only) of the sequencer log in the same window that contains the initial Logs & Reports screen, as shown in the following example:
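The capture scheme described above, where a process writes stdout and stderr into a single log so nothing is missed, can be sketched in the shell. This is an illustrative sketch only; the log path and messages are made up, not TDput internals.

```shell
# A minimal sketch of the logging scheme described above: a command group
# writes to stdout and stderr, and both streams are piped into one log file.
# The log path and messages are illustrative, not TDput's.
LOG=/tmp/sequencer_demo.log

{
  echo "INFO: operation starting"        # written to stdout
  echo "WARNING: disk space low" >&2     # written to stderr
} > "$LOG" 2>&1                          # both streams land in the same log

cat "$LOG"
```

Note that the order of the redirections matters: `> "$LOG" 2>&1` sends stdout to the log first, then points stderr at the same destination.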

Filtered view of sequencer log

Filter types are displayed in bold type.

Using filtered log view. To view all entries, Click Here

TDput initially displays a filtered view containing only WARNING and ERROR messages. The error displayed is often caused by an earlier choice or error, and the filtered view may remind you of the earlier choice that led to the current error. Clicking the Click Here link expands the view to display everything the sequencer log contains while keeping your position at the top of the file.
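If you have shell access to the node, a filtered view like the one TDput shows can be approximated with grep. This is a sketch under the assumption that the sequencer log is plain text; the file name and contents below are illustrative.

```shell
# A minimal sketch of the filtered view: extract only WARNING and ERROR
# lines from a sequencer-style log. File name and contents are illustrative.
LOG=/tmp/demo_sequencer.log
printf '%s\n' \
  'INFO: BEGIN: Gather Package Info' \
  'WARNING: package PPDE is older than the installed version' \
  'ERROR: job 6014 FAILED on lion_bynet' \
  'INFO: END: Gather Package Info' > "$LOG"

# Show only the lines the filtered view would display.
grep -E 'WARNING|ERROR' "$LOG"
```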

If you are near the end of the upgrade or on a large system, the file download can take a long time (from several seconds to several minutes), especially over a dial-up connection. Wait until the browser's activity indicator (the animated Microsoft logo in the upper-right corner of Internet Explorer) stops moving, then try clicking the last [Details] link listed. The following example shows how the screen might look after clicking the [Details] link.

Example of Details link

  • This screen starts displaying at the “BEGIN: Gather Package Info” section of the log file; it does not start at the top of the log file, so you have to scroll down to find the error. “Gather Package Info” is the plug-in that had the error.
  • Each BEGIN tag in the log file references the table of contents in the TDput operation currently running. Remember the “sequencer” process is producing all the messages in this log file.

    The entries below describe a job or process being started on the node lion_bynet. The blue entries are links to the stdout (output) file and the stderr (log) file of this process.

  • This is for job number 6014; there is also a set of links for another job, 6015.

    During the Media Sources step in the table of contents, two sources were selected for available packages. The first source was /AAA_pkgs/v2r413_efix and the second was /AAA_pkgs/v2r413, so two jobs are running to process all available packages in each directory. The links point to the stdout (output) and stderr (log) files for these processes.
creating job 6014 (Discover Available Packages) "discoverpackages -d "/AAA_pkgs/
v2r413_efix" -s 0 ) (job_service.cpp+1211)
   job 6014 view output file on lion_bynet (job_service.cpp+1230)
   job 6014 view log file on lion_bynet (job_service.cpp+1232)
The next technique is extremely helpful for reducing confusion, and it is especially valuable over dial-up or other slow connections. Use it even if you are not on dial-up; it saves time.

To view the job output and log files, Teradata recommends opening them in a new browser window by right-clicking the link and choosing Open in New Window. This lets you toggle between two windows (to see where you came from as well as the link you are viewing), and it leaves the main sequencer log open so you do not have to reload it when you go back to it. If you call support, these are the files you need displayed on your PC so that you can read the logs to the support representative.

The view_output_file_on_lion_bynet link contains anything printed to the stdout (output) file. In most cases this file contains information the job produces for use by either another job or the sequencer. In the current example, it lists all available packages found in *.pkg (streams format), *.gz (scm Master.Src format), *.Z (GSPATCH format), or directory format.
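The package formats listed above can be enumerated from the shell with find. This is a hypothetical sketch of such a discovery pass, not TDput's actual implementation; the directory and file names are illustrative.

```shell
# A hypothetical sketch of enumerating the package formats listed above.
# The directory and file names are illustrative, not TDput internals.
SRC=/tmp/demo_pkgs
mkdir -p "$SRC"
touch "$SRC/PPDE.pkg" "$SRC/Master.Src.gz" "$SRC/GS12345.Z"

# List candidates in streams (*.pkg), scm (*.gz), and GSPATCH (*.Z) formats.
find "$SRC" -maxdepth 1 \( -name '*.pkg' -o -name '*.gz' -o -name '*.Z' \) | sort
```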

The view_log_file_on_lion_bynet link contains anything printed to the stderr (log) file. Since all log messages printed by TDput go to stderr, all the log messages appear in this file. If the job runs an OS utility, TDput relays that utility's output into this log file as well. In our example, pkgtrans is being run to convert some of the packages from streams format into directory format (so that package-specific information such as version and dependencies can be obtained). In general, when attempting to locate the source of a problem, the log file from the failing job is the place to start.

The main sequencer log tells us that Job 6015 SUCCEEDED on node (lion_bynet) but the other Job 6014 FAILED. Look at the Job 6014 log to find out what is wrong. To open this log file in a new window, right-click the blue link in the line Job 6014 view_log_file_on_lion_bynet to see a submenu, as shown below:


Left-click Open in New Window to open a new browser window, displaying the Filtered View of the log messages for Job 6014 as shown below:


Error messages related to the job failure display. Click the link to display the unfiltered log file and automatically scroll down to the ERROR messages, as shown below:


This error occurred while trying to run the utility pkgtrans on a package in stream format. You can copy the command line being run and paste it into a console window; you will see the same error output that was placed in the window. Further investigation of this example shows that the file PPDE.pkg is corrupt.

The error message above “pkgtrans: ERROR: attempting to process datastream failed - bad format in datastream table-of-contents” was caused by a corrupted pkgadd streams format file in the available packages location. Replacing the file with a good one resolves this problem.
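Before retrying, one way to confirm that the copy in the available packages location really is corrupt is to compare its checksum against a known-good copy of the same package. A minimal sketch, using the standard POSIX cksum utility; the paths and file contents below are illustrative.

```shell
# A minimal sketch: comparing a suspect package file against a known-good
# copy with cksum before retrying. Paths and contents are illustrative.
GOOD=/tmp/good_PPDE.pkg
SUSPECT=/tmp/suspect_PPDE.pkg
printf 'good package payload' > "$GOOD"
printf 'truncated pay'        > "$SUSPECT"

# A checksum mismatch suggests the copy in the available packages
# location is corrupt and should be replaced.
if [ "$(cksum < "$GOOD")" = "$(cksum < "$SUSPECT")" ]; then
  echo "files match"
else
  echo "files differ: replace the suspect copy"
fi
```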

To determine the exact problem, drill down into the logs in a similar manner as the previous example. Click Logs (HTML Format) in the main PUT window to open the Logs & Reports screen.

Click the Install/Upgrade Software link.

If you had previous errors, you will see more summary sections. Below is an example of a single summary section. Scroll to the bottom to see the last summary section, then click the [Details] link to download the entire file (which can take a long time over dial-up or on a large system). You are automatically positioned at the beginning of the plug-in that has the failed job, so you may need to scroll down further to find the Job links.

You should see the following:


This example shows how the browser might look after clicking the [Details] link:


Scroll down to find the Job links for the failed job.

In this case, the first link you come to is not a failed job, so keep scrolling.


Here are the failed jobs:


Drill down further to determine the actual error. Right-click the link view_log_file_on_lion_bynet. You should see a menu that looks like this:


Next, click Open in New Window.


You see the Log File for the TDConvertJob on the node “lion.”

Now click the [Details] link. You should see output as shown below:


The job, TDConvertJob, is attempting to run a Teradata Conversion script. The command line used to run this script is in the log file. It is two lines above the link View_output_data_for_process (copied below):
785 INFO: the command line is "/nssoft/tdsw/05.00.00.16/bin/postupgrade/
0010.a.v2r5_pre_upg.pl" -c 04.01.03.57 -n 05.00.00.16 -p dbc,dbc
(ProcessManagerUnix.cpp+184)

Notice that the log is telling us the script 0010.a.v2r5_pre_upg.pl had the error. Look at the output from this script by viewing one or both of the blue links in the picture above. You can open them in the same window or a new window; remember that over a dial-up connection, going back requires reloading the previous file, while opening in a new window keeps all the files open at the same time.

Clicking the link View_output_data_for_process displays the following:


The output is a usage statement, which implies that the arguments being passed to the script are incorrect. The usage claims the -p argument is optional, but for upgrades using TDput this argument is always passed to the script. Its format is COP/user/password. Because we removed the “ncrtdat” COP entry from our log-on string, the script fails. To continue the example, the log-on string is restored and the password is changed to “bozo.”

The new log-on string in the dbs_logon_string.txt file is as follows:

LOGON_STRING=ncrtdat/dbc,bozo

This illustrates the second type of failure seen in v2r4 to v2r5 upgrades: an incorrect dbc password. In this example the password has been changed to “bozo”; the correct password is the default, “dbc.”

Close the log windows, then click Retry in the main TDput screen.

One of the status messages is “/usr/bin/sleep 360,” which comes from the script run during the bteq logon. The script waits six minutes for the logon to occur; TDput does nothing during this time, and you simply wait for the script to finish.
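The wait-for-logon behavior described above can be sketched as a poll-until-deadline loop. This is a hypothetical illustration: `check_logon` and the short demo deadline stand in for the real bteq logon check and the script's 360-second wait.

```shell
# A hypothetical sketch of the wait-for-logon pattern described above.
# check_logon and the 6-second demo deadline stand in for the real bteq
# logon check and the script's 360-second wait.
check_logon() {
  true   # stand-in for "can bteq log on yet?"; succeeds immediately here
}

deadline=$(( $(date +%s) + 6 ))   # demo deadline; the real script allows 360 s
logon_status=timeout
while [ "$(date +%s)" -lt "$deadline" ]; do
  if check_logon; then
    logon_status=ok
    break
  fi
  sleep 1                         # poll once per second until the deadline
done
echo "logon status: $logon_status"
```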

If the script fails, drill down into the log files as before; the Teradata script output log contains the following at the end of the output:


This is a bteq log-on problem.

The error message states that something is wrong with the log-on string, but it does not specifically say the password is wrong. However, we know the password is incorrect. Change it back to “dbc” and restart the script.
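With the password restored, the corrected entry in the dbs_logon_string.txt file (using the LOGON_STRING format shown earlier) would read:

```
LOGON_STRING=ncrtdat/dbc,dbc
```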

This last change is successful and the upgrade continues.