Also, most of the examples show the symbol being given an initial value. No variable has to be given an initial value; the examples include one only to show that a variable accepts it.
When giving integer values (e.g., as operands or arguments): if you exceed the maximum integer, the value will "wrap around" to the minimum integer and count up from there. If you go below the minimum integer, the value will wrap to the maximum integer and count down from there.
When using integers in an operation (e.g., 2147483647 + 10): if the result exceeds the maximum or minimum integer, Cog will change the result to 2147483648 - note that this number is unsigned and is 1 greater than the maximum integer.
Precision is a quality of floating-point numbers that describes how many significant digits a variable type can hold. As mentioned above, precision is not constant: the more digits a value uses to the left of the decimal point, the less precise its fractional part will be.
Cog's floats have a maximum precision of about 9 digits - which means that, at most, a float can store and use 9 significant digits with no loss of precision. The minimum precision is 6 digits (with all significant digits to the right of the decimal point), but remember that Cog only displays the first 6 decimal digits.
This limit on precision is caused by the way fractional numbers are stored in memory. Unlike integers, they are stored approximately - and after so many digits of precision, the accuracy of the stored number will fail. Since floating-point numbers cannot always be trusted to be exact, we have to be careful with what operations we use them in.
PrintFlex(111.222333 * 1000);
"111222.335958" prints.
PrintFlex(111222333 % 1000);
"336.000000" prints.
Both examples show the limits of floating-point precision.