Additionally, the circuits 312 are configured to execute the mask generation 805 for the mantissa bits to be represented in a 32 bit format. The fraction (i.e., mantissa) bits are limited to 23 bits as shown in the 32 bit register 605 in FIG. However, the circuits 312 allow the input exponent to be greater than 125 without a rounding circuit and/or denormalization circuit by utilizing the additional exponent bits (e.g., 1 exponent bit) that are available in the 64 bit register. Note that the single precision format only has 8 bits available for the exponent, while a number stored in double precision format has 11 exponent bits available as shown in FIG.
Unlike in an embodiment using a denormalized internal representation, the full mantissa is available for storing non-zero mantissa bits, which may lead to excess precision, in accordance with the second embodiment. The output after mantissa masking 810 results in the single precision result value 450. This single precision result value 450 is stored in the 64 bit register in a 32 bit register format by using mantissa masking 810. Of course, since log is a generic intrinsic function in Fortran, a compiler could evaluate the expression 1.0+x in extended precision throughout, computing its logarithm in the same precision, but evidently we cannot assume that the compiler will do so. Unfortunately, if we declare that variable real, we may still be foiled by a compiler that substitutes a value kept in a register in extended precision for one appearance of the variable and a value stored in memory in single precision for another.
Instead, we would need to declare the variable with a type that corresponds to the extended precision format. In short, there is no portable way to write this program in standard Fortran that is guaranteed to prevent the expression 1.0+x from being evaluated in a way that invalidates our proof. In a case where the input exponent is greater than 127 but less than 149 and the result value needs to be in single precision format (i.e., 32 bits), rounding would be required for the number computed by the reciprocal estimate at block 520. However, the circuits 312 are configured to compute a mantissa mask based on the input value at block 910. The circuits 312 apply the mantissa mask to the mantissa (i.e., the fraction part) of the computed reciprocal estimate at block 915. The circuits 312 write the result value in the 64 bit register of the registers 315 at block 535.
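As an illustration of blocks 910 and 915, the following C sketch (the function name and the exponent arithmetic are illustrative assumptions, not the claimed circuit) clears the low fraction bits of a double-format estimate so that it carries no more precision than a single precision denormal with the same exponent could hold; note that it truncates rather than rounds, consistent with avoiding a rounding circuit.

    #include <stdint.h>
    #include <string.h>

    /* Illustrative sketch of blocks 910/915 (not the claimed circuit): mask the
     * mantissa of a double-format reciprocal estimate so it keeps no more
     * fraction bits than a single precision denormal with the same exponent
     * could hold.  Truncates rather than rounds. */
    static double mask_mantissa_for_sp_denormal(double estimate)
    {
        uint64_t bits;
        memcpy(&bits, &estimate, sizeof bits);

        int unbiased = (int)((bits >> 52) & 0x7FF) - 1023;   /* result exponent */
        if (unbiased >= -126 || unbiased < -149)
            return estimate;             /* normal SP range, or handled elsewhere */

        int keep = 149 + unbiased;       /* fraction bits an SP denormal keeps    */
        int drop = 52 - keep;            /* low double-format fraction bits to clear */
        bits &= ~((UINT64_C(1) << drop) - 1);

        memcpy(&estimate, &bits, sizeof bits);
        return estimate;                 /* value written to the 64 bit register  */
    }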
Based on the particular input exponent of the input value, the circuits 312 use, e.g., 9 exponent bits to store the result value and zero the corresponding number of bits in the mantissa. The circuits 312 may execute blocks 510, 905, 520, and 910 in parallel (i.e., concurrently or nearly concurrently). FIG. 4 is a block diagram 400 where a double precision reciprocal estimate processes a double precision input and returns a single precision result according to a first embodiment. In FIG. 4, estimate instructions avoid rounding and denormalization circuits commonly used for arithmetic operations, which reduces cost and space when building processor circuits like the processor 305.
In FIG. 4, the circuits 312 are configured to perform the operations, and the circuits 312 store the output (the number/answer) in the register 315 according to the desired precision format. The circuits 312 receive an input of a first precision having a wide precision value at block 1005. The input may be a double precision value in a 64 bit format. The output with the narrow precision value may be a single precision value in a 32 bit format. Single precision denormalized values are represented in double precision non-denormalized number format. For example, Table 2 provides bit accuracy for the input exponent of the input value and how the circuits 312 restrict the bits of the mantissa to account for input exponents from −128 to −149, which exceed the single precision format of mantissa bits.
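The statement that single precision denormalized values can be held in double precision non-denormalized form follows from the wider exponent range alone; the short C program below (an illustration, not part of the disclosed circuits) shows that even the smallest single precision denormal becomes an ordinary normalized number when widened to double.

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* Illustration only: the smallest single precision denormal (2^-149) becomes
     * an ordinary normalized number once widened to double, because double has
     * 11 exponent bits instead of 8. */
    int main(void)
    {
        uint32_t sp_bits = 0x00000001u;   /* smallest SP denormal */
        float f;
        memcpy(&f, &sp_bits, sizeof f);

        double d = f;                     /* widen: now a normal double */
        uint64_t dp_bits;
        memcpy(&dp_bits, &d, sizeof dp_bits);

        printf("unbiased exponent = %d, fraction bits = %llu\n",
               (int)((dp_bits >> 52) & 0x7FF) - 1023,
               (unsigned long long)(dp_bits & ((UINT64_C(1) << 52) - 1)));
        /* prints: unbiased exponent = -149, fraction bits = 0 */
        return 0;
    }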
By using an extra exponent bit in the 64 bit register, the circuits 312 store a denormalized single precision result in a 64 bit register according to Table 2. Use a format wider than double if it is reasonably fast and wide enough, otherwise resort to something else. Some computations can be performed more easily when extended precision is available, but they can also be carried out in double precision with only somewhat greater effort. Consider computing the Euclidean norm of a vector of double precision numbers. By computing the squares of the elements and accumulating their sum in an IEEE 754 extended double format with its wider exponent range, we can trivially avoid premature underflow or overflow for vectors of practical lengths. On extended-based systems, this is the fastest way to compute the norm.
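A minimal sketch of the norm computation described above, assuming a C environment where long double maps to an extended format with a wider exponent range (on systems where long double is simply double, the loop degenerates to the ordinary one):

    #include <math.h>
    #include <stddef.h>

    /* Accumulate the sum of squares in long double; on extended-based systems
     * its wider exponent range keeps x[i]*x[i] from prematurely overflowing or
     * underflowing for vectors of practical length. */
    double euclidean_norm(const double *x, size_t n)
    {
        long double sum = 0.0L;
        for (size_t i = 0; i < n; i++)
            sum += (long double)x[i] * x[i];
        return (double)sqrtl(sum);
    }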
The circuits 312 are configured to read the input value 405. In the first embodiment, the input exponent could not be greater than 127; if it were, the single precision result value 450 would have been designated as zero in the register. However, in the second embodiment, the input exponent is checked and has to be greater than 149 before the circuits 312 designate the single precision result value 450 as zero. The circuits 312 present the output as the narrow precision value with the eight exponent bits, and the valid exponent range corresponds to the eight exponent bits.
DETAILED DESCRIPTION Exemplary embodiments are configured to execute mixed precision estimate instruction computing. In one implementation, a circuit can receive an input in a double precision format, compute the estimate instruction, and provide an output as a single precision result. The single precision result can be stored in a register according to a single precision format. In the third embodiment, the circuits 312 are allowed to deviate from the exact storage format for 32 bit single precision. The combination of features required or recommended by the C99 standard supports some of the five options listed above but not all. Thus, neither the double nor the double_t type can be compiled to produce the fastest code on current extended-based hardware.
If the detects 215, 220, 425, and 430 detect nothing, the multiplexer 435 of the circuits 312 is configured to output the calculation of the double precision reciprocal estimate function 410 as the single precision result value 450. In at least one embodiment, a reciprocal estimate function returns a limited number of mantissa bits, e.g., 8 or 12 bits. Compile to produce the fastest code, using extended precision where possible on extended-based systems. Clearly most numerical software does not require more of the arithmetic than that the relative error in each operation is bounded by the "machine epsilon". Thus, while computing some of the intermediate results in extended precision may yield a more accurate result, extended precision is not essential.
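Returning to the estimate function mentioned above, the following is a purely behavioral stand-in (real hardware typically uses a small lookup table, and the name is hypothetical): it truncates 1.0/x so that only about 8 mantissa bits of the estimate remain meaningful.

    #include <stdint.h>
    #include <string.h>

    /* Behavioral stand-in for a hardware reciprocal estimate (real designs use a
     * small lookup table): truncate 1.0/x so that only the top 8 fraction bits
     * of the estimate remain. */
    static double recip_estimate_8bit(double x)
    {
        double y = 1.0 / x;
        uint64_t bits;
        memcpy(&bits, &y, sizeof bits);
        bits &= ~((UINT64_C(1) << (52 - 8)) - 1);   /* clear the low 44 fraction bits */
        memcpy(&y, &bits, sizeof bits);
        return y;
    }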
In this case, we might prefer that the compiler use extended precision only when it will not appreciably slow the program and use double precision otherwise. The IEEE standard requires that the result of addition, subtraction, multiplication and division be exactly rounded. That is, the result must be computed exactly and then rounded to the nearest floating-point number.
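Exact rounding can be observed directly: at 1e16 the spacing between adjacent doubles is 2, so the sums below land exactly halfway between two representable values and round-to-nearest-even decides the result.

    #include <stdio.h>

    /* Each sum is exactly halfway between two representable doubles; IEEE
     * round-to-nearest breaks the tie toward the even significand. */
    int main(void)
    {
        printf("%.1f\n", 1e16 + 1.0);   /* 10000000000000000.0 */
        printf("%.1f\n", 1e16 + 3.0);   /* 10000000000000004.0 */
        return 0;
    }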
The section Guard Digits pointed out that computing the exact difference or sum of two floating-point numbers can be very expensive when their exponents are substantially different. That section introduced guard digits, which provide a practical way of computing differences while guaranteeing that the relative error is small. However, computing with a single guard digit will not always give the same answer as computing the exact result and then rounding. By introducing a second guard digit and a third sticky bit, differences can be computed at only a little more cost than with a single guard digit, but the result is the same as if the difference were computed exactly and then rounded. We are now in a position to answer the question, Does it matter if the basic arithmetic operations introduce a little more rounding error than necessary?
The answer is that it does matter, because accurate basic operations enable us to prove that formulas are "correct" in the sense that they have a small relative error. The section Cancellation discussed several algorithms that require guard digits to produce correct results in this sense. If the inputs to those formulas are numbers representing imprecise measurements, however, the bounds of Theorems 3 and 4 become less interesting.
The reason is that the benign cancellation x - y can become catastrophic if x and y are only approximations to some measured quantity. But accurate operations are useful even in the face of inexact data, because they enable us to establish exact relationships like those discussed in Theorems 6 and 7. These are useful even if every floating-point variable is only an approximation to some actual value. Estimate instructions, such as the reciprocal estimate (e.g., for 1/x) and the reciprocal square root estimate (e.g., for 1/√x), are not standardized. They are frequently implemented in accordance with standard instruction sets, such as the Power ISA™ of IBM®.
The publication of Power ISA™ Version 2.06 Revision B, dated Jul. 23, 2010 is herein incorporated by reference in its entirety. The state of the art has not offered mixed precision processing. Mixed precision processing refers to having an input of one precision such as double precision (e.g., 64 bit precision format) and output of a different precision such as single precision (e.g., 32 bit precision format). It is quite common for an algorithm to require a short burst of higher precision in order to produce accurate results.
As discussed in the section Proof of Theorem 4, when b² ≈ 4ac, rounding error can contaminate up to half the digits in the roots computed with the quadratic formula. By performing the subcalculation of b² - 4ac in double precision, half the double precision bits of the root are lost, which means that all the single precision bits are preserved. Since most floating-point calculations have rounding error anyway, does it matter if the basic arithmetic operations introduce a little bit more rounding error than necessary? The section Guard Digits discusses guard digits, a means of reducing the error when subtracting two nearby numbers. Guard digits were considered sufficiently important by IBM that in 1968 it added a guard digit to the double precision format in the System/360 architecture, and retrofitted all existing machines in the field.
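A small C sketch of that "short burst of higher precision" (names are illustrative, and a non-negative discriminant is assumed): the inputs and the returned roots are single precision, but the cancellation-prone subcalculation b² - 4ac is carried out in double.

    #include <math.h>

    /* Single precision inputs and roots, but the discriminant b*b - 4*a*c is
     * evaluated in double, so its rounding error stays below single precision
     * significance.  Assumes a != 0 and a non-negative discriminant. */
    void quadratic_roots(float a, float b, float c, float *r1, float *r2)
    {
        double d = (double)b * b - 4.0 * (double)a * c;
        double s = sqrt(d);
        *r1 = (float)((-b + s) / (2.0 * (double)a));
        *r2 = (float)((-b - s) / (2.0 * (double)a));
    }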
Two examples are given to illustrate the utility of guard digits. In FIG. 8, a block diagram 800 shows the third embodiment which builds on the first and second embodiments. In the diagram 800, the circuits 312 load the input value 405 from one of the registers 315.
Since the circuits 312 are configured for mixed precision inputs and outputs, the input value 405 can be a double precision 64 bit number and/or a single precision 32 bit number. Assume that in this case, the input value is a double precision number of a form such that after the computation the output would result in a single precision denormalized number. In current implementations, double precision reciprocal estimate and reciprocal square root estimate instructions give a result for double precision inputs. In Power ISA™, double precision reciprocal or square root estimate instructions give a result for single precision inputs when a shared architected register file format is used. Builders of computer systems often need information about floating-point arithmetic. There are, however, remarkably few sources of detailed information about it.
One of the few books on the subject, Floating-Point Computation by Pat Sterbenz, is long out of print. This paper is a tutorial on those aspects of floating-point arithmetic (floating-point hereafter) that have a direct connection to systems building. The first section, Rounding Error, discusses the implications of using different rounding strategies for the basic operations of addition, subtraction, multiplication and division. It also contains background information on the two methods of measuring rounding error, ulps and relative error. The second part discusses the IEEE floating-point standard, which is becoming rapidly accepted by commercial hardware manufacturers. Included in the IEEE standard is the rounding method for basic operations.
The discussion of the standard draws on the material in the section Rounding Error. The third part discusses the connections between floating-point and the design of various aspects of computer systems. Topics include instruction set design, optimizing compilers and exception handling. Various examples have been applied for computing reciprocal estimates for mixed precision, but the disclosure is not meant to be limited to these examples. A fourth embodiment discusses a mixed precision multiply-estimate instruction for computing a single precision result for double precision inputs. However, in the second embodiment, for a generated result, excess non-zero mantissa bits are generated by the circuits 312.
In particular, one example may be an instruction handling mixed-mode arithmetic that processes the input similarly to a double precision mantissa, and another example may be the Power® ISA single precision floating point store. In one example, results in single precision are represented in a bit pattern corresponding to double precision in the register file for processing. A number is architecturally a single precision number only if the stored exponent matches the single precision range, and if only bits of the mantissa corresponding to bits in the architected single precision format are non-zero.
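A hedged C sketch of that check (the function name is hypothetical, and infinities, NaNs and denormals would need additional cases): the stored exponent must fall in the single precision range and the 29 fraction bits below the architected 23 must all be zero.

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    /* Hypothetical check: a value held in double format is architecturally a
     * single precision number only if its exponent fits the single precision
     * range and only the top 23 fraction bits are populated.  Zeros are
     * accepted; infinities, NaNs and denormals would need extra cases. */
    static bool is_architecturally_single(double v)
    {
        uint64_t bits;
        memcpy(&bits, &v, sizeof bits);

        int unbiased   = (int)((bits >> 52) & 0x7FF) - 1023;
        uint64_t low29 = bits & ((UINT64_C(1) << 29) - 1);   /* below the 23 SP bits */

        bool exp_ok = (unbiased >= -126 && unbiased <= 127) || v == 0.0;
        return exp_ok && low29 == 0;
    }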
SUMMARY According to exemplary embodiments, a computer system, method, and computer program product are provided for performing a mixed precision estimate. A processing circuit receives an input of a wide precision having a wide precision value. The processing circuit computes an output in an output exponent range corresponding to a narrow precision value based on the input having the wide precision value. Round results correctly to both the precision and range of the double format. This strict enforcement of double precision would be most useful for programs that test either numerical software or the arithmetic itself near the limits of both the range and precision of the double format. Such careful test programs tend to be difficult to write in a portable way; they become even more difficult when they must employ dummy subroutines and other tricks to force results to be rounded to a particular format.
Thus, a programmer using an extended-based system to develop robust software that must be portable to all IEEE 754 implementations would quickly come to appreciate being able to emulate the arithmetic of single/double systems without extraordinary effort. In this section, we classify existing implementations of IEEE 754 arithmetic based on the precisions of the destination formats they normally use. We then review some examples from the paper to show that delivering results in a wider precision than a program expects can cause it to compute wrong results even though it is provably correct when the expected precision is used. We also revisit one of the proofs in the paper to illustrate the intellectual effort required to cope with unexpected precision even when it doesn't invalidate our programs. These examples show that despite all that the IEEE standard prescribes, the differences it allows among different implementations can prevent us from writing portable, efficient numerical software whose behavior we can accurately predict. To develop such software, then, we must first create programming languages and environments that limit the variability the IEEE standard permits and allow programmers to express the floating-point semantics upon which their programs depend.
The IEEE 754 floating-point standard guarantees that add, subtract, multiply, divide, fused multiply–add, square root, and floating-point remainder will give the correctly rounded result of the infinite-precision operation. No such guarantee was given in the 1985 standard for more complex functions and they are typically only accurate to within the last bit at best. However, the 2008 standard guarantees that conforming implementations will give correctly rounded results which respect the active rounding mode; implementation of the functions, however, is optional. On the other hand, rounding of exact numbers will introduce some round-off error in the reported result. In a sequence of calculations, these rounding errors generally accumulate, and in certain ill-conditioned cases they may make the result meaningless. This paper has demonstrated that it is possible to reason rigorously about floating-point.
The task of constructing reliable floating-point software is made much easier when the underlying computer system is supportive of floating-point. In addition to the two examples just mentioned, the section Systems Aspects of this paper has examples ranging from instruction set design to compiler optimization illustrating how to better support floating-point. The results of this section can be summarized by saying that a guard digit guarantees accuracy when nearby precisely known quantities are subtracted. Sometimes a formula that gives inaccurate results can be rewritten to have much higher numerical accuracy by using benign cancellation; however, the procedure only works if subtraction is performed using a guard digit. The price of a guard digit is not high, because it merely requires making the adder one bit wider.
For a 54 bit double precision adder, the additional cost is less than 2%. For this price, you gain the ability to run many algorithms such as the formula for computing the area of a triangle and the expression ln(1 + x). Although most modern computers have a guard digit, there are a few that do not.
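The ln(1 + x) expression mentioned above is the subject of Theorem 4; the classic rewriting is shown below (in practice a C programmer would simply call log1p).

    #include <math.h>

    /* Theorem 4's rewriting of ln(1 + x): when 1 + x rounds to 1, x itself is
     * the best answer; otherwise the factor x / ((1 + x) - 1) compensates for
     * the rounding of 1 + x. */
    double ln_one_plus(double x)
    {
        double w = 1.0 + x;
        if (w == 1.0)
            return x;
        return x * log(w) / (w - 1.0);
    }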
Although (x ⊖ y) ⊗ (x ⊕ y) is an excellent approximation to x² - y², the floating-point numbers x and y might themselves be approximations to some true quantities x̂ and ŷ. For example, x̂ and ŷ might be exactly known decimal numbers that cannot be expressed exactly in binary. In this case, even though x ⊖ y is a good approximation to x - y, it can have a huge relative error compared to the true expression x̂ - ŷ, and so the advantage of (x + y)(x - y) over x² - y² is not as dramatic.
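When x and y are exact, though, the contrast is easy to demonstrate; in the single precision example below, x*x and y*y are each rounded to 24 significant bits before the subtraction, so the first form loses most of its digits while the second is correct to within one ulp.

    #include <stdio.h>

    /* x and y are exactly representable in float, so (x+y)*(x-y) subtracts
     * nearly exact quantities, while x*x and y*y are each rounded to 24
     * significant bits before the catastrophic subtraction. */
    int main(void)
    {
        float x = 12345679.0f, y = 12345678.0f;
        printf("x*x - y*y   = %.1f\n", (double)(x * x - y * y));
        printf("(x+y)*(x-y) = %.1f\n", (double)((x + y) * (x - y)));
        printf("exact value = 24691357.0\n");
        return 0;
    }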
Since computing (x + y)(x - y) is about the same amount of work as computing x² - y², it is clearly the preferred form in this case. In general, however, replacing a catastrophic cancellation by a benign one is not worthwhile if the expense is large, because the input is often an approximation. But eliminating a cancellation entirely is worthwhile even if the data are not exact. Throughout this paper, it will be assumed that the floating-point inputs to an algorithm are exact and that the results are computed as accurately as possible. The single precision input value 205 is input into a reciprocal estimate function 210, a ±zero detect 215, and a ±infinity (∞) detect 220.
A multiplexer 225, also referred to as a data selector, receives input from the computed reciprocal estimate function 210, +zero, −zero, +infinity, and −infinity. The multiplexer 225 selects the desired input based on whether the zero detect 215 detects a zero and/or whether the infinity detect 220 detects infinity. If nothing is detected by the zero detect 215 and the infinity detect 220, the multiplexer 225 passes the computed value from the reciprocal estimate function 210, and this computed value is a single precision result value 230. Further, logic of the multiplexer 225 is provided in Table 1 below. This logic applies for the zero detect 215 and infinity detect 220 as will be discussed later. In computing, double precision floating point is a computer number format that occupies two adjacent storage locations in computer memory.
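Table 1 is not reproduced in this excerpt, but a plausible reading of the multiplexer 225 selection just described, expressed as a hypothetical C sketch rather than the claimed logic, is that a signed zero input selects the correspondingly signed infinity, a signed infinity selects the correspondingly signed zero, and anything else passes the estimate through.

    #include <math.h>

    /* Hypothetical rendering of the multiplexer 225 selection for a reciprocal
     * estimate: signed zero in -> signed infinity out, signed infinity in ->
     * signed zero out, otherwise pass the value from the estimate function 210. */
    float recip_result_select(float input, float estimate_from_210)
    {
        if (input == 0.0f)                    /* +-zero detect 215     */
            return copysignf(INFINITY, input);
        if (isinf(input))                     /* +-infinity detect 220 */
            return copysignf(0.0f, input);
        return estimate_from_210;
    }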
A double precision number, sometimes simply called a double, may be defined to be an integer, fixed point, or floating point. Modern computers with 32-bit storage locations use two memory locations to store a 64-bit double-precision number (a single storage location can hold a single-precision number). Double-precision floating-point is an IEEE 754 standard for encoding binary or decimal floating-point numbers in 64 bits.
When floating-point operations are done with a guard digit, they are not as accurate as if they were computed exactly then rounded to the nearest floating-point number. Operations performed in this manner will be called exactly rounded. The example immediately preceding Theorem 2 shows that a single guard digit will not always give exactly rounded results. The previous section gave several examples of algorithms that require a guard digit in order to work properly. This section gives examples of algorithms that require exact rounding. Although it depends on the system, PHP typically uses the IEEE 754 double precision format, which will give a maximum relative error due to rounding in the order of 1.11e-16. Non-elementary arithmetic operations may give larger errors, and, of course, error propagation must be considered when several operations are compounded.
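The 1.11e-16 figure quoted above is simply half the double precision machine epsilon, i.e., the 2^-53 bound that round-to-nearest guarantees per operation:

    #include <float.h>
    #include <stdio.h>

    /* Half the double precision machine epsilon is the round-to-nearest error
     * bound of 2^-53 per operation, i.e., about 1.11e-16. */
    int main(void)
    {
        printf("DBL_EPSILON / 2 = %.3g\n", DBL_EPSILON / 2);
        return 0;
    }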