New Observation Interface

We have introduced new features to Viewline that increase the speed and accuracy of condition grading and CCTV survey reporting in our Windows-based application.

Requirements

You will need the latest version of Viewline installed. Some of the new features require an internet connection, as they rely on an API call (see 'Viewline API calls' below).

Observation Input

To fully utilise the latest features, select AI inference here (Fig 1); this requires an internet connection.

Clicking the New Observation button opens the Observation Input box and captures an image. If you have AI inference selected, you may find that the AI has already populated the Position (Distance) and Code fields. As the observation is made, a highlighted text description of the full observation appears at the top of the Observation Input box, shown as black text on a yellow background.

Viewline AI inference is very accurate on twenty classes of drainage observations and on distances. Our AI does not hallucinate: it either infers correctly or makes no inference at all. It may pick up several defects if they are present, and it will also distinguish good joints from bad joints. If a defect requires an 'at Joint' attribute, this is added to the observation description. If an inference is incorrect or not made, the user can easily add the details manually.
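
As a rough sketch of how an inference of this kind might be applied, assuming a hypothetical result structure (field names such as distance_m, code and at_joint are our own illustration, not the Viewline data model), the behaviour is simply populate-what-was-inferred and leave the rest for manual entry:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InferenceResult:
    # Hypothetical result structure for illustration only; not the Viewline data model.
    distance_m: Optional[float]  # e.g. 5.70, or None when no confident inference was made
    code: Optional[str]          # e.g. "JD", "OJ", "C", or None
    at_joint: bool               # True when the defect sits at a visible joint

def populate_observation_fields(result: InferenceResult) -> dict:
    """Pre-fill the Observation Input fields; anything not inferred is left for manual entry."""
    fields = {"position": "", "code": "", "description": ""}
    if result.distance_m is not None:
        fields["position"] = f"{result.distance_m:05.2f}m"
    if result.code is not None:
        fields["code"] = result.code
        fields["description"] = result.code + (" at Joint" if result.at_joint else "")
    return fields

# Example: a Joint Displacement inferred at 5.70 m
print(populate_observation_fields(InferenceResult(5.70, "JD", True)))
```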

When the user saves the Observation, the image is captured.

Enhanced Observation Input

The Enhanced Observation Input contains the same features as the standard Observation Input with a few additions. These include a video player within the Observation Input area and options to highlight the observation or defect area and include that image as part of the report.

This means the user can remain in Observation mode without swapping in and out of the main user interface.
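
As a minimal sketch of what highlighting an observation area involves (the file names, coordinates and colours below are illustrative assumptions, not Viewline's internal behaviour), the following draws a rectangle around the defect region on a captured frame and saves the annotated copy for inclusion in the report:

```python
from PIL import Image, ImageDraw

def highlight_defect(frame_path: str, box: tuple, out_path: str) -> None:
    """Draw a rectangle around the defect area and save the annotated image.

    box is (left, top, right, bottom) in pixel coordinates.
    """
    image = Image.open(frame_path).convert("RGB")
    ImageDraw.Draw(image).rectangle(box, outline=(255, 255, 0), width=4)  # yellow outline
    image.save(out_path)

# Example usage (path and coordinates are illustrative)
highlight_defect("frame_0570.png", (220, 140, 480, 360), "frame_0570_highlighted.png")
```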

Distance Recognition

Our recognition model is very accurate, especially when the following points are taken into account:

  • Viewline AI performs best on text that contrasts with its background, but it works well on just about any text.
  • At the moment it is set up to recognise metric distances, i.e. numbers followed by an 'm', for example 05.70m. If there are other digits on the screen (dates, diameters, etc.) it will add those to a list headed by the most probable inference (see the sketch after this list).
  • Viewline AI gives preference to text at the edges of an image, which is normally where distances are shown: at the bottom or top of the screen, close to the perimeter of the image.
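
To make the 'numbers followed by an m' rule concrete, here is a small Python sketch of how distance candidates could be pulled out of on-screen text and ordered with the most probable first; the ranking is a simplified stand-in for the model's probability ranking, not the actual Viewline implementation:

```python
import re

def distance_candidates(screen_text: str) -> list:
    """Return candidate readings from on-screen text, most probable distance first.

    Values written as a number followed by 'm' (e.g. '05.70m') head the list;
    any other digit groups found on screen (dates, diameters, etc.) are
    appended after them, mirroring the behaviour described above.
    """
    metric = re.findall(r"\d{1,3}\.\d{1,2}\s?m(?![a-z])", screen_text, flags=re.IGNORECASE)
    others = [d for d in re.findall(r"\d[\d./:]*\d|\d", screen_text)
              if not any(d in m for m in metric)]
    return metric + others

# Example: a typical overlay showing a distance, a date and a pipe diameter
print(distance_candidates("05.70m  12/03/2024  DIA 150mm"))
# -> ['05.70m', '12/03/2024', '150']
```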

Recognised Observation Classes

For now our detection model recognises twenty main classes as listed below:

In the table below, the Code, Long hand description and 'at Joint' columns are populated automatically; the Additions required column is entered manually.

Code | Long hand description           | Additions required (manual)                          | 'at Joint' (automated)
JN   | Junction                        | Position and diameter                                |
CN   | Connection                      | Position and diameter                                |
CXI  | Defective Intruding Connection  | Position, diameter and intrusion                     |
JD   | Joint Displacement              | Medium or Large                                      |
OJ   | Open Joint                      | Medium or Large                                      |
C    | Crack                           | Longitudinal, Circumferential, Multiple or Radiating | Yes
F    | Fracture                        | Longitudinal, Circumferential, Multiple or Radiating | Yes
R    | Roots                           |                                                      | Yes
WL   | Water Level                     | Percentage loss                                      |
DEG  | Attached Deposits, Grease       | Position and percentage loss                         | Yes
DES  | Settled Deposits, Fine          | Percentage loss                                      | Yes
DER  | Settled Deposits, Coarse        | Percentage loss                                      | Yes
DEE  | Attached Deposits, Encrustation | Position and percentage loss                         | Yes
H    | Hole                            | Position                                             |
SR   | Sealing Ring, Intruding         | Position                                             |
OB   | Obstruction, Obstacle           | Percentage loss                                      |
B    | Broken                          | Position and percentage loss                         | Yes
XP   | Collapsed Pipe                  | Percentage loss                                      |

In our experience as drainage condition coding experts, these are the most commonly used observations. The 'at Joint' function is dependent on the visibility of the joint; it applies regardless of the joint's condition (whether it is a JD, OJ or a good joint).
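
A minimal sketch of that rule, under the assumption that the inputs are simply a joint-visibility flag and the inferred code (our illustration, not Viewline's internal logic), might look like this:

```python
def needs_at_joint(joint_visible: bool, code: str) -> bool:
    """Add the 'at Joint' attribute when a joint is visible at the defect,
    regardless of the joint's own condition (JD, OJ or a good joint)."""
    # Codes whose descriptions carry the 'at Joint' attribute (from the table above).
    at_joint_codes = {"C", "F", "R", "DEG", "DES", "DER", "DEE", "B"}
    return joint_visible and code in at_joint_codes

# Example: a Crack at a visible joint gets the attribute; with no joint visible it does not
print(needs_at_joint(True, "C"))   # True
print(needs_at_joint(False, "C"))  # False
```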

Viewline API calls

A call sends an image to our AI models and the response is returned in JSON format. These are proprietary applications designed for Viewline consumption. Third-party access to our API is available on request.
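
For illustration only, the sketch below shows the general shape of such a call; the endpoint URL, authentication header and response handling are placeholders we have assumed for the example rather than the published Viewline API, which is available on request.

```python
import requests

# Hypothetical endpoint and key: placeholders, not the published Viewline API.
API_URL = "https://api.example.com/viewline/infer"
API_KEY = "your-api-key"

def infer_observation(image_path: str) -> dict:
    """Send a captured frame to the inference endpoint and return the parsed JSON."""
    with open(image_path, "rb") as f:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
            timeout=30,
        )
    response.raise_for_status()
    return response.json()  # e.g. inferred code, distance and 'at Joint' flag

# Example usage (file name is illustrative)
print(infer_observation("frame_0570.png"))
```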