📘Terminology
Using the same terminology will help you better understand transforms. Let's quickly review a few terms we'll use consistently throughout this documentation.
The Input File is the raw, untransformed data file, typically in CSV, Excel, or JSON format. The system uses this file as its starting point for the transform.
The Mapping Specification (the map) defines the target format for your Input File. It assigns data types, requirements, character lengths, and decimal scale, and it maps the input columns to the target columns.
Just as importantly, the map defines how data is moved between source and target columns, known as the Transformation Type. This tells the system whether an input column is ignored, copied, mapped through a lookup, defaulted, and so on.
A transform is not possible without a map.
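As a mental model only, the sketch below shows the kind of information one row of a Mapping Specification carries. The names (`ColumnMapping`, `TransformType`, and their members) are hypothetical illustrations for this page, not the actual Perigee Transforms API.

```csharp
// Illustrative only: what a single mapping-specification entry might capture.
// All type and property names here are hypothetical, not the Perigee API.
public enum TransformType { Ignore, Copy, Map, Default }

public record ColumnMapping(
    string SourceColumn,     // column name in the Input File
    string TargetColumn,     // column name in the Target File
    string DataType,         // e.g. "string", "decimal", "date"
    bool Required,           // must the cell be populated?
    int MaxLength,           // character length limit for the target cell
    int DecimalScale,        // decimal places allowed for numeric targets
    TransformType Transform, // how the value moves: Ignore, Copy, Map, Default
    string? DefaultValue = null,
    string? MapKey = null);  // which Lookup Values Map key applies when Transform == Map
```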
The Lookup Values Map is a file that contains three columns: MapKey, SourceField, and TargetField. When a column's transform type in your Mapping Specification is set to MAP, the Lookup Values Map is used to transform the source column value into a destination column value. For example:
| MapKey       | SourceField | TargetField    |
| ------------ | ----------- | -------------- |
| StateCode    | AZ          | Arizona        |
| StateCode    | NC          | North Carolina |
| PropertyCode | A12         | AB001200       |
| PropertyCode | Z19         | AB001900       |
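To make the MAP transform type concrete, here is a minimal, hypothetical sketch of how a lookup could be resolved against the rows above. The dictionary and the `Resolve` helper are illustrative assumptions, not Perigee Transforms code.

```csharp
// Illustrative only: how a MAP transform conceptually resolves a value.
// The dictionary mirrors the Lookup Values Map rows shown above; the helper
// name is hypothetical, not the Perigee API.
using System;
using System.Collections.Generic;

var lookup = new Dictionary<(string MapKey, string SourceField), string>
{
    [("StateCode", "AZ")]     = "Arizona",
    [("StateCode", "NC")]     = "North Carolina",
    [("PropertyCode", "A12")] = "AB001200",
    [("PropertyCode", "Z19")] = "AB001900",
};

// Resolve a source value for a column whose transform type is MAP.
// A miss would surface as a "missing map key" in the Transformation Report.
string? Resolve(string mapKey, string sourceValue) =>
    lookup.TryGetValue((mapKey, sourceValue), out var target) ? target : null;

Console.WriteLine(Resolve("StateCode", "AZ")); // Arizona
```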
The Target File is the result of processing an Input File through a map into the destination format. This file is guaranteed to match the format defined in the map, with its columns and cells constrained to the restrictions (both type and length) the map provides.
Once you've obtained the Target File, you'll also have access to the Transformation Report. This report contains every issue, warning, and cell that produced erroneous or bad results, including warnings about missing map keys, missing maps, decimal overflow, and more.
Think of the Transformation Report as the generated report on your transformation process. If the file contains no issues, the report shows a clean bill of health. If the engine had to truncate a value to fit the map, the report shows which value was truncated and what row it was on, giving you the information you need to correct the problem.
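As a rough illustration of what a single report entry might capture, the sketch below models one issue row. The `ReportSeverity` and `TransformIssue` names are assumptions made for this example, not the actual report schema.

```csharp
// Illustrative only: one way to model a single Transformation Report entry.
// These names are hypothetical, not the Perigee API.
public enum ReportSeverity { Info, Warning, Error }

public record TransformIssue(
    ReportSeverity Severity,
    int Row,                 // row in the Input File where the issue occurred
    string Column,           // target column affected
    string Issue,            // e.g. "Value truncated to 10 characters", "Missing map key"
    string? OriginalValue,   // the value as it appeared in the source cell
    string? ResultingValue); // the value written to the Target File, if any
```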
Data Quality Reporting differs from Transformation Reporting in that it doesn't care about the mapping specification you fed the transformation engine; it looks at the quality of your input data only. It answers questions such as:
- Does my data contain newline characters in the cells?
- Does my data have special (non-ASCII) characters?
- Does my file contain irregular amounts?
Data Quality Reporting gives you one final preview of how clean your input data was.
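To show the flavor of these checks, here is a small, hypothetical sketch of cell-level quality tests for newlines and non-ASCII characters. The helper names are assumptions for illustration, not part of Perigee Transforms.

```csharp
// Illustrative only: the kind of checks a data-quality pass might run on raw
// cells, independent of any mapping. Helper names are hypothetical.
using System;
using System.Linq;

static bool HasNewlines(string cell) =>
    cell.Contains('\n') || cell.Contains('\r');

static bool HasNonAsciiCharacters(string cell) =>
    cell.Any(c => c > 127);

// Example: a cell like this would be flagged in the Data Quality Report.
var cell = "123 Main St.\r\nSuite 4 – Bldg B";
Console.WriteLine($"Newlines: {HasNewlines(cell)}, Non-ASCII: {HasNonAsciiCharacters(cell)}");
// Output: Newlines: True, Non-ASCII: True
```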
Perigee Transforms contains a fully written segmentation validation engine for working with Yardi Segment Rules.
Unlike the "standard" reporting options, which can be cryptic and only surface a single issue at a time, our engine runs every segment through every rule and reports any issues it finds. No more guessing whether you've fixed the last segment issue or, even worse, turning off segmentation altogether just to load a file.