Abstract: Array dataflow dependence analysis is paramount for automatic parallelization. Describing dependences at the level of operations and array elements has been shown to significantly improve the output of many code optimizations. However, this kind of analysis has two main drawbacks: its high cost and its limited scope, which covers only a small class of programs. We first describe a new polynomial-time algorithm that outperforms current methods in terms of both complexity and application domain. Then, continuing the work of J.-F. Collard, we present a general framework that handles any kind of dependence, possibly by producing approximate dependences. The program model is extended to any reducible control-flow graph and any kind of reference to array elements. An original method, called iterative analysis, finds relations between non-affine constraints so as to improve the accuracy of the analysis. Moreover, we provide a criterion ensuring that the approximation obtained is the best possible with respect to the information on non-affine constraints gathered by other analyses. Finally, several traditional applications of dataflow analysis are adapted to take advantage of our results, and we detail in particular an array expansion scheme that offers a trade-off between run-time overhead, memory requirements, and degree of parallelism.