Property Name | Data Type | Qualifiers |
Qualifier Name | Data Type | Value | Scope | Flavors |
DiagnosticCreationClassName | string |
Description | string | The scoping Test's CreationClassName. | None | TRANSLATABLE= true |
Key | boolean | true | None | OVERRIDABLE= false |
MaxLen | uint32 | 256 | None | None |
Propagated | string | CIM_DiagnosticTest.CreationClassName | None | OVERRIDABLE= false |
DiagnosticName | string |
Description | string | The scoping Test's Name. | None | TRANSLATABLE= true |
Key | boolean | true | None | OVERRIDABLE= false |
MaxLen | uint32 | 256 | None | None |
Propagated | string | CIM_DiagnosticTest.Name | None | OVERRIDABLE= false |
DiagSystemCreationClassName | string |
Description | string | The scoping Test's SystemCreationClassName. | None | TRANSLATABLE= true |
Key | boolean | true | None | OVERRIDABLE= false |
MaxLen | uint32 | 256 | None | None |
Propagated | string | CIM_DiagnosticTest.SystemCreationClassName | None | OVERRIDABLE= false |
DiagSystemName | string |
Description | string | The scoping Test's SystemName. | None | TRANSLATABLE= true |
Key | boolean | true | None | OVERRIDABLE= false |
MaxLen | uint32 | 256 | None | None |
Propagated | string | CIM_DiagnosticTest.SystemName | None | OVERRIDABLE= false |
EstimatedTimeOfPerforming | uint32 |
Description | string | Estimated number of seconds to perform the Diagnostic Test indicated by the DiagnosticCreationClassName and DiagnosticName properties. After the test has completed, the actual elapsed time can be determined by subtracting the TestStartTime from the TestCompletionTime. A similar property is defined in the DiagnosticTestForMSE association. The difference between the two properties is that the value stored in the association is a generic test execution time for the Element and the Test, whereas the value here (in DiagnosticResult) is the estimated time that this instance, with the given settings, would take to run the test. A CIM Consumer can compare this value with the value in the DiagnosticTestForMSE association to get an idea of what impact their settings have on test execution. | None | TRANSLATABLE= true |
ModelCorrespondence | string | CIM_DiagnosticTestForMSE.EstimatedTimeOfPerforming | None | None |
Units | string | Seconds | None | TRANSLATABLE= true |
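For illustration, the elapsed-time comparison described above for EstimatedTimeOfPerforming might look like the following Python sketch. It assumes the property values are available as plain Python strings and integers (for example, retrieved through a WBEM client); the helper names and sample values are hypothetical, not part of the schema.

```python
from datetime import datetime, timedelta, timezone

def parse_cim_datetime(value: str) -> datetime:
    """Parse a CIM datetime string of the form yyyymmddhhmmss.mmmmmmsutc."""
    stamp = datetime.strptime(value[:21], "%Y%m%d%H%M%S.%f")
    sign = 1 if value[21] == "+" else -1
    offset_minutes = sign * int(value[22:25])
    return stamp.replace(tzinfo=timezone(timedelta(minutes=offset_minutes)))

def actual_duration_seconds(result: dict) -> float:
    """TestCompletionTime minus TestStartTime, as described above."""
    start = parse_cim_datetime(result["TestStartTime"])
    end = parse_cim_datetime(result["TestCompletionTime"])
    return (end - start).total_seconds()

# Sample values (illustrative only):
result = {
    "EstimatedTimeOfPerforming": 120,  # seconds, from this property
    "TestStartTime": "20240101120000.000000+000",
    "TestCompletionTime": "20240101120310.000000+000",
}
elapsed = actual_duration_seconds(result)
print(f"estimated {result['EstimatedTimeOfPerforming']} s, actual {elapsed:.0f} s")
```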
ExecutionID | string |
Description | string | The Unique identifier for an instance of Diagnostic Results. | None | TRANSLATABLE= true |
Key | boolean | true | None | OVERRIDABLE= false |
MaxLen | uint32 | 1024 | None | None |
HaltOnError | boolean |
Description | string | When this flag is true, the test halts after finding the first error. | None | TRANSLATABLE= true |
ModelCorrespondence | string | CIM_DiagnosticSetting.HaltOnError | None | None |
IsPackage | boolean |
Deprecated | string | No value | TOSUBCLASS= false | None |
Description | string | This property is being deprecated. Tests should be grouped at the test level, not by the model. If this property is TRUE, then this DiagnosticResult summarizes the results from the execution of a packaged set of DiagnosticTests. The Tests in the package can be identified by following the DiagnosticResultForTest association to the test and then using the DiagnosticTestInPackage aggregation. The individual Results can be broken out by instantiating DiagnosticResult for the individual lower-level tests and aggregating them into the 'summary' Result using the DiagnosticResultInPackage association. | None | TRANSLATABLE= true |
LoopsFailed | uint32 |
Description | string | Since some tests may be looped, it is useful to report how many iterations passed and failed. This is relevant in analyzing transitory failures. For example, if all the errors occurred in just one of 100 iterations, the device may be viewed as OK or marginal, to be monitored further rather than failed. Note: LoopsPassed & LoopsFailed should add up to the loops completed. | None | TRANSLATABLE= true |
LoopsPassed | uint32 |
Description | string | Since some tests may be looped, it is useful to report how many iterations passed and failed. This is relevant in analyzing transitory failures. For example, if all the errors occurred in just one of 100 iterations, the device may be viewed as OK or marginal, to be monitored further rather than failed. Note: LoopsPassed & LoopsFailed should add up to the loops completed. | None | TRANSLATABLE= true |
OtherLoopControlDescription | string |
Description | string | Provides additional information for LoopControl when its value is set to 1 ('Other'). | None | TRANSLATABLE= true |
ModelCorrespondence | string | CIM_DiagnosticSetting.OtherLoopControlDescription, CIM_DiagnosticResult.LoopControlParameter | None | None |
OtherStateDescription | string |
Description | string | When "Other" (value=1) is entered in the TestState property, OtherStateDescription can be used to describe the test's state. | None | TRANSLATABLE= true |
ModelCorrespondence | string | CIM_DiagnosticResult.TestState | None | None |
PercentComplete | uint8 |
Description | string | The percentage of the test that has executed thus far, if the TestState property is set to "In Progress" or the percentage of the complete test that was executed if the TestState property is set to any of the completed states ("Passed", "Failed" or "Stopped"). Final results may be based on less than 100% coverage due to the parameters defined in DiagnosticSetting (such as QuickMode, PercentOfTestCoverage or HaltOnError). | None | TRANSLATABLE= true |
MaxValue | sint64 | 100 | None | None |
MinValue | sint64 | 0 | None | None |
Units | string | Percent | None | TRANSLATABLE= true |
PercentOfTestCoverage | uint8 |
Description | string | Specifies the test coverage performed by the diagnostic. For example, a hard drive scan test could be asked to run at 50%. The most effective way to accomplish this is for the test software to scan every other track, as opposed to only scanning the first half of a drive. It is assumed that the effectiveness of the test is impacted proportionally to the percentage of testing performed. Permissible values for this property range from 0 to 100. | None | TRANSLATABLE= true |
MaxValue | sint64 | 100 | None | None |
MinValue | sint64 | 0 | None | None |
ModelCorrespondence | string | CIM_DiagnosticSetting.PercentOfTestCoverage | None | None |
Units | string | Percent | None | TRANSLATABLE= true |
QuickMode | boolean |
Description | string | When this flag is true, the test software attempts to run in an accelerated fashion either by reducing the coverage or number of tests performed. | None | TRANSLATABLE= true |
ModelCorrespondence | string | CIM_DiagnosticSetting.QuickMode | None | None |
ReportSoftErrors | boolean |
Description | string | When this flag is true, the diagnostic test reports 'soft errors'. In this context, a soft error is a message from the diagnostic, reporting a known defect in the hardware, driver configuration, or execution environment. Examples are: 'Not enough memory', 'Driver IOCTL not implemented', 'Video RAM compare failed during polygon fill test (a known defect in the video chipset)', etc. | None | TRANSLATABLE= true |
ModelCorrespondence | string | CIM_DiagnosticSetting.ReportSoftErrors | None | None |
ReportStatusMessages | boolean |
Description | string | When this flag is true, the diagnostic test reports 'status messages'. In this context, a status message indicates that the diagnostic code is at a checkpoint. Examples are: "Completion of phase 1", "Complex pattern", etc. | None | TRANSLATABLE= true |
ModelCorrespondence | string | CIM_DiagnosticSetting.ReportStatusMessages | None | None |
ResultPersistence | uint32 |
Description | string | The ResultPersistence property is a directive from a diagnostic client to a diagnostic provider. It allows the client to specify to the diagnostic service provider how long to persist the messages that result from execution of a diagnostic service. This applies to instances of DiagnosticResult. The timeout period starts upon completion of the diagnostic action described by the DiagnosticTest.
Here is a summary of the choices and behaviors for different ResultPersistence values:
0 = "No Persistence":
Setting the timer to zero tells the provider not to persist the diagnostic result. The diagnostic information is only available while the diagnostic is executing or at its conclusion.
Value > 0 and < 0xFFFFFFFF = "Persist With TimeOut":
Setting ResultPersistence to an integer in this range will cause the DiagnosticResult to be persisted for that number of seconds. At the end of that time, the DiagnosticResult may be deleted by the diagnostic service provider.
0xFFFFFFFF = "Persist Forever":
When the timeout is set to the very large value 0xFFFFFFFF, the provider shall persist results forever. In this case, the client MUST bear the responsibility for deleting them. | None | TRANSLATABLE= true |
ModelCorrespondence | string | CIM_DiagnosticSetting.ResultPersistence | None | None |
Units | string | Seconds | None | TRANSLATABLE= true |
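A minimal sketch of how a client might interpret the ResultPersistence values summarized above; the function name and message strings are illustrative, not defined by the schema.

```python
def describe_result_persistence(value: int) -> str:
    # 0 means the result is not persisted at all.
    if value == 0:
        return "No Persistence: result available only while the diagnostic runs"
    # 0xFFFFFFFF means the provider keeps the result until the client deletes it.
    if value == 0xFFFFFFFF:
        return "Persist Forever: the client must delete the result itself"
    # Any other value is a timeout in seconds.
    return f"Persist With TimeOut: provider may delete the result after {value} s"

print(describe_result_persistence(0))
print(describe_result_persistence(3600))
print(describe_result_persistence(0xFFFFFFFF))
```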
TestCompletionTime | datetime |
Description | string | The date and time when this test completed. | None | TRANSLATABLE= true |
TestStartTime | datetime |
Description | string | The date and time when this test started. | None | TRANSLATABLE= true |
TestState | uint16 |
Description | string | Describes how the test is progressing. For example, if the test was discontinued, the TestState will be "Stopped" (value=5), or if testing is currently executing, TestState will be "In Progress" (4). | None | TRANSLATABLE= true |
ModelCorrespondence | string | CIM_DiagnosticResult.OtherStateDescription | None | None |
ValueMap | string | 0, 1, 2, 3, 4, 5 | None | None |
Values | string | Unknown, Other, Passed, Failed, In Progress, Stopped | None | TRANSLATABLE= true |
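A small sketch, assuming plain Python values, of resolving TestState to its display string and falling back to OtherStateDescription when the state is "Other" (1); the dictionary is transcribed from the ValueMap/Values rows above, and the function name is hypothetical.

```python
# Mapping taken from the TestState ValueMap/Values qualifiers above.
TEST_STATE = {0: "Unknown", 1: "Other", 2: "Passed", 3: "Failed",
              4: "In Progress", 5: "Stopped"}

def test_state_text(state: int, other_description: str = "") -> str:
    # Per the ModelCorrespondence, OtherStateDescription describes the
    # state when TestState is "Other" (value=1).
    if state == 1 and other_description:
        return other_description
    return TEST_STATE.get(state, "Unknown")

print(test_state_text(4))                          # "In Progress"
print(test_state_text(1, "Waiting on operator"))   # uses the free-form description
```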
TestWarningLevel | uint16 |
Description | string | Sets the level of warning messages to be logged. If for example no warning information is required, the level would be set to "No Warnings" (value=0). Using "Missing Resources" (value=1) will cause warnings to be generated when required resources or hardware are not found. Setting the value to 2, "Testing Impacts", results in both missing resources and 'test impact' warnings (for example, multiple retries required) to be reported. | None | TRANSLATABLE= true |
ModelCorrespondence | string | CIM_DiagnosticSetting.TestWarningLevel | None | None |
ValueMap | string | 0, 1, 2, 3 | None | None |
Values | string | No Warnings, Missing Resources, Testing Impacts, All Warnings | None | TRANSLATABLE= true |
TimeStamp | datetime |
Description | string | The date and time the result was last updated. | None | TRANSLATABLE= true |
ErrorCode | string[] |
ArrayType | string | Indexed | None | OVERRIDABLE= false |
Description | string | If applicable, this string should contain one or more vendor-specific error codes that the diagnostic service detected. These error codes may be used by the vendor for a variety of purposes such as: fault database indexing, field service trouble ticketing, product quality tracking, part failure history, etc. Since these codes are for vendor purposes, they may assume any form. Details on suggested use cases will be left to white papers. The array of error codes has model correspondence with an ErrorCount array so the number of errors reported can be analyzed by individual error code. | None | TRANSLATABLE= true |
ModelCorrespondence | string | CIM_DiagnosticResult.ErrorCount | None | None |
ErrorCount | uint32[] |
ArrayType | string | Indexed | None | OVERRIDABLE= false |
Description | string | Since some tests may detect transient and correctable errors such as a network diagnostic or memory test, an error count is useful to indicate the severity of the failure. This field contains an integer value of the number of errors detected by the test. The ErrorCount is an array with model correspondence to ErrorCode so that the test can report an ErrorCount on each type of error encountered. It is recommended that hard errors and correctable or recoverable errors be given different codes so that clients with knowledge of the error codes can evaluate correctable, recoverable, and hard errors independently. | None | TRANSLATABLE= true |
ModelCorrespondence | string | CIM_DiagnosticResult.ErrorCode | None | None |
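A brief sketch of the indexed-array correspondence between ErrorCode and ErrorCount described above: entry i of ErrorCount holds the number of errors reported for the vendor code at entry i of ErrorCode. The sample codes and counts are made up for illustration.

```python
# Index-aligned arrays, as implied by the Indexed ArrayType and the
# ModelCorrespondence between ErrorCode and ErrorCount.
error_code = ["VEND-MEM-001", "VEND-MEM-017"]   # vendor-specific codes (fictional)
error_count = [3, 1]                            # errors detected per code

for code, count in zip(error_code, error_count):
    print(f"{code}: {count} error(s)")

total_errors = sum(error_count)                 # overall error total for the run
```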
LoopControl | uint16[] |
ArrayType | string | Indexed | None | OVERRIDABLE= false |
Description | string | LoopControl, used in conjunction with LoopControlParameter, sets one or more loop control mechanisms that limit the number of times a test should be repeated with a single invocation of RunTest by a CIM client. There is an array-positional correspondence between LoopControl entries and LoopControlParameter entries. The entries in these coupled arrays of loop controls can be used in a logical OR fashion to achieve the desired loop control. For example, if a client wants to loop a test 1000 times but quit if a timer runs out, it could specify both controls as two separate entries in each array. The looping test will terminate when the first of the two ORed conditions is met.
The descriptions for each loop control are given below:
Unknown/Default (= 0)
Other (= 1) : Additional detail may be found in OtherLoopControlDescription.
Continuous (= 2) : The corresponding LoopControlParameter is ignored and the test will execute continuously. Tests that use this control should also support DiscontinueTest.
Count(=3): The corresponding LoopControlParameter is interpreted as a loop count (uint32), indicating the number of times the test should be repeated with a single invocation of RunTest by a CIM client.
Timer (= 4) : The corresponding LoopControlParameter is interpreted as an initial value (uint32) for a test loop timer, given in seconds. The looping is terminated when this timer has lapsed.
ErrorCount (= 5) : The corresponding LoopControlParameter is interpreted as an error count (uint32). The loop will continue until the number of errors that have occurred exceeds the ErrorCount. Note: the ErrorCount only refers to hard test errors; it does not include soft errors or warnings. | None | TRANSLATABLE= true |
ModelCorrespondence | string | CIM_DiagnosticSetting.LoopControl, CIM_DiagnosticResult.LoopControlParameter, CIM_DiagnosticResult.OtherLoopControlDescription | None | None |
ValueMap | string | 0, 1, 2, 3, 4, 5 | None | None |
Values | string | Unknown/Default, Other, Continuous, Count, Timer, Error Count | None | TRANSLATABLE= true |
LoopControlParameter | string[] |
ArrayType | string | Indexed | None | OVERRIDABLE= false |
Description | string | Array entries contain parameters corresponding to entries in the LoopControl array, limiting the number of times a test should be repeated with a single invocation of RunTest by a CIM client. | None | TRANSLATABLE= true |
ModelCorrespondence | string | CIM_DiagnosticSetting.LoopControlParameter, CIM_DiagnosticResult.LoopControl | None | None |
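A hedged sketch of the ORed loop-control evaluation described above for LoopControl and LoopControlParameter: looping stops as soon as any configured control is satisfied. Only the Count (3), Timer (4), and Error Count (5) controls are checked here, and the function and variable names are hypothetical.

```python
import time

def should_stop(loop_control, loop_control_parameter,
                iterations_done, start_time, hard_errors):
    """Return True when any of the coupled loop controls is satisfied."""
    for control, parameter in zip(loop_control, loop_control_parameter):
        if control == 3 and iterations_done >= int(parameter):           # Count
            return True
        if control == 4 and time.time() - start_time >= int(parameter):  # Timer (s)
            return True
        if control == 5 and hard_errors > int(parameter):                # Error Count
            return True
    return False

# Example: loop 1000 times OR stop after 60 seconds, whichever comes first.
loop_control = [3, 4]
loop_control_parameter = ["1000", "60"]
```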
TestResults | string[] |
ArrayType | string | Ordered | None | OVERRIDABLE= false |
Description | string | TestResults stores one or more textual results from the execution of the DiagnosticTest(s) referenced by the DiagnosticCreationClassName and DiagnosticName properties. Note that this property is defined as an 'ordered' array type, to maintain the order in which the results are stored. One entry is considered a cell location in the array. Each entry is time stamped and contains information in the following format: CIMDateTime|TestName|MessageText, where:
"CIMDateTime" is the standard CIM data type with the following format: yyyymmddhhmmss.mmmmmmsutc, where
yyyy = year, e.g. 2003
mm = month (01 - 12)
dd = day (01 - 31)
hh = hour (00 - 24)
mm = minute (00-59)
ss = second (00-59)
mmmmmm = microsecond (000000-999999)
s = "+" or "-" indicating the sign of the UTC correction field
utc = offset from UTC (Coordinated Universal Time) in minutes
"TestName" is the name of the internal test that produced the message
"MessageText" is a free form string that is the 'test result'
"|" is a delimiter character. | None | TRANSLATABLE= true |