Tuesday, February 7, 2023

Speeding up Queries With Z-Order

Z-order is an ordering for multi-dimensional data, e.g. rows in a database table. Once data is in Z-order, it is possible to efficiently search against more columns. This article shows how Z-ordering works and how you can use it with Apache Impala.

In a previous blog post, we demonstrated the power of Parquet page indexes, which can drastically improve the performance of selective queries. By "selective queries," we mean queries that have very specific search criteria in the WHERE clause, hence they typically return a small fraction of the rows in a table. This commonly happens in active archive and operational reporting use cases. But the page index filtering we described could only search efficiently against a limited number of columns. Which columns are these? A table stored in a distributed file system typically has partition columns and data columns. Partition columns organize the data files into file system directories. Partitioning is hierarchical, which means some partitions are nested under other partitions, like the following:

year=2020/month=03/day=01/hour=01/minute=00/country_code=…

If we have search criteria against partition columns, it means that we can filter out whole directories. However, if your partitioning is too granular, i.e., you have too many partition columns, then your data will be spread across a lot of small files. This can backfire when you run queries that need to scan a large portion of the table.

Under the leaf partitions we store the data files, which contain the data columns. Partition columns are not stored in the files since they can be inferred from the file path. Parquet page index filtering helps us when we have search criteria against data columns. Page indexes store min/max statistics about Parquet pages (more on that in the aforementioned blog post), so with their help we only need to read fractions of the file. But this only works efficiently if the file is sorted (with the use of the SORT BY clause) by a column, and we have a search condition on that column. We can specify multiple columns in the SORT BY clause, but we will typically only get great filtering efficiency against the first column, which dominates the ordering.

So we have great search capabilities against the partition columns plus one data column (the one that drives the ordering in the data files). With our sample schema above, this means we could specify SORT BY "platform" to enable fast analysis of all Android or iOS users. But what if we wanted to know how well version 5.16 of our app is doing across platforms and countries?

Can we do more? It turns out that we can. There are exotic orderings out there that can sort data by multiple columns at once. In this post, we will describe how Z-order allows the ordering of multidimensional data (multiple columns) with the help of a space-filling curve. This ordering enables us to efficiently search against more columns. More on that later.

Basic concepts

Lexical ordering

We mentioned above that you can specify multiple columns in the SORT BY clause. The sequence of the sort columns in the SORT BY clause defines the organization of the rows in the file. That is, the rows are sorted by the first column, and rows that have the same value in the first column are sorted by the second column, and so on. In that sense, Impala's SORT BY works similarly to SQL's ORDER BY. This ordering is known as "lexical ordering." The following table (a small illustrative example) is in lexical order by columns A, B, and C:

  A  B  C
  1  1  2
  1  2  1
  2  1  3
  2  2  1
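As a quick sanity check, lexical ordering is exactly what tuple comparison gives you in Python; a minimal illustration (the row values are made up for the example):

```python
# Rows as (A, B, C) tuples; Python tuples compare lexicographically,
# so a plain sort yields exactly the "lexical ordering" described above.
rows = [(2, 1, 3), (1, 2, 1), (2, 2, 1), (1, 1, 2)]
rows_sorted = sorted(rows)
print(rows_sorted)  # [(1, 1, 2), (1, 2, 1), (2, 1, 3), (2, 2, 1)]
```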


Z-order

To quote Wikipedia, "Z-order maps multidimensional data to one dimension while preserving locality of the data points." By "multidimensional data," we can simply think of a table, or a set of columns (the sorting columns) of the table. Our data is not necessarily numerical, but if it is numerical, then it is easy to think of the table rows as "data points" in a multidimensional space.

"Preserving locality" means that data points (rows) that are close to each other in this multidimensional space will be close to each other in the ordering. Actually, this won't hold for all data points, but it will hold for most of them. Z-order achieves this by defining a "space-filling curve," which helps to order the data. A space-filling curve is a curve in the multidimensional space that touches all data points. For example, in a 2D space the curve looks like the following:
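The 2D curve can also be generated numerically; here is a small sketch (the two-bit width and the choice that x contributes the higher bit of each pair are assumptions for illustration) that visits grid points in the order that traces the nested "Z" shapes:

```python
def z_value(x: int, y: int, bits: int = 2) -> int:
    """Interleave the bits of x and y, most significant first
    (assuming x contributes the higher bit of each pair)."""
    z = 0
    for i in range(bits - 1, -1, -1):
        z = (z << 2) | (((x >> i) & 1) << 1) | ((y >> i) & 1)
    return z

# Visit all points of a 4x4 grid in Z-order; connecting consecutive
# points draws the nested "Z" shapes of the space-filling curve.
curve = sorted(((x, y) for x in range(4) for y in range(4)),
               key=lambda p: z_value(p[0], p[1]))
print(curve[:4])  # the first "Z": [(0, 0), (0, 1), (1, 0), (1, 1)]
```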

In a 3D space the curve looks like this:

Looking at the figures, you have probably figured out why it is called Z-order. What it looks like in a 4D space is left to the reader's imagination.

Note that points that are close to each other in space are mostly close to each other on the curve as well. This property, combined with the min/max statistics in the Parquet page index, lets us filter data with great efficiency.

It is also important to point out that Parquet page indexing and Z-ordering work at different levels. This means no changes were introduced to the reader; the algorithms described in our previous blog post still work.

Use cases for Z-order

Some workloads are extremely well suited for Z-order, for example telecommunications and IoT workloads. This is because Z-order is most effective when the columns in the Z-order clause have similar properties in terms of range and distribution. Columns with a high number of distinct values are typically good candidates for Z-ordering.

In telecommunications workloads, it is common to have multiple columns with the same properties, like sender IP and phone number, receiver IP and phone number, etc. They also have a high number of distinct values, and the sender/receiver values are not correlated.

Therefore, a table that stores phone calls could be Z-ordered by "call_start_timestamp," "caller_phone_number," and "callee_phone_number."

In some IoT use cases we have a lot of sensors that send telemetry data, so it is common to have columns for longitude, latitude, timestamp, sensor ID, and so on, and for queries to filter data by these dimensions. For example, a query might search for data in a specific geographic region (i.e., filtering by latitude and longitude) for a period of time (e.g., a month).

Non-use cases for Z-order

  • If you have columns whose orderings are correlated, like departure time and arrival time, then there is no need to put both of them in Z-order, because sorting by departure time almost always sorts the arrival time column as well. But of course, you can (and probably should) put "departure time" in Z-order with other columns that you want to search by.
  • Searching by columns that have only a few distinct values. In that case there is no big difference between lexical ordering and Z-order, but you might want to choose lexical ordering for faster writes. Or you might simply partition your table by such columns. Please note that the number of distinct values affects the layout of your Parquet files. Columns that have few distinct values have few Parquet pages, so page filtering can become coarse-grained. To overcome this you can use the query option "parquet_page_row_count_limit" and set it to 20,000. 

How to use Z-order in Apache Impala

As we mentioned earlier, with the "SORT BY (a, b, c)" clause your data will be stored in lexical order in your data files. But that is only the default behavior; you can also specify an ordering for SORT BY. There are two orderings at the time of writing:

  • SORT BY LEXICAL (a, b, c)
  • SORT BY ZORDER (a, b, c)

Which ordering works better for you depends on your workload. Z-order is a better general-purpose choice for ordering by multiple columns because it works well with a wider variety of queries.

Let's look at an example that everyone can try on their own. We are going to use the store_sales table from the TPC-DS benchmark:

CREATE TABLE store_sales_zorder (
  ss_sold_time_sk INT, ss_item_sk BIGINT,
  ss_customer_sk INT, ss_cdemo_sk INT,
  ss_hdemo_sk INT, ss_addr_sk INT,
  ss_store_sk INT, ss_promo_sk INT,
  ss_ticket_number BIGINT, ss_quantity INT,
  ss_wholesale_cost DECIMAL(7,2), ss_list_price DECIMAL(7,2),
  ss_sales_price DECIMAL(7,2), ss_ext_discount_amt DECIMAL(7,2),
  ss_ext_sales_price DECIMAL(7,2), ss_ext_wholesale_cost DECIMAL(7,2),
  ss_ext_list_price DECIMAL(7,2), ss_ext_tax DECIMAL(7,2),
  ss_coupon_amt DECIMAL(7,2), ss_net_paid DECIMAL(7,2),
  ss_net_paid_inc_tax DECIMAL(7,2), ss_net_profit DECIMAL(7,2),
  ss_sold_date_sk INT)
SORT BY ZORDER (ss_customer_sk, ss_cdemo_sk)
STORED AS PARQUET;

I chose the columns "ss_customer_sk" and "ss_cdemo_sk" because they have the most distinct values in this table. Since I provided the SORT BY ZORDER clause in the CREATE TABLE statement, all INSERTs against this table will be Z-ordered. To make the measurements simpler, we set "num_nodes" to 1. This way we will have a single Parquet file, and the query profile will also be simpler to analyze:



set num_nodes=1;
explain insert into store_sales_zorder select * from store_sales;

WRITE TO HDFS [store_sales_zorder, OVERWRITE=false]
|  partitions=1
|
01:SORT
|  order by: ZORDER: ss_customer_sk, ss_cdemo_sk
|  row-size=100B cardinality=2.88M
|
00:SCAN HDFS [tpcds_parquet.store_sales]
   HDFS partitions=1824/1824 files=1824 size=196.92MB
   row-size=100B cardinality=2.88M

Let's look at how efficiently we can query our table by the Z-ordered columns. But before that, let's look at the column statistics.

Finding the outlier values is too easy for page filtering, so let's search for the average values:

select ss_customer_sk
from store_sales_zorder
where ss_customer_sk = 49969;

select ss_cdemo_sk
from store_sales_zorder
where ss_cdemo_sk = 961370;


After executing each query, we can check how efficient page filtering was by looking at the query profile. Search for the values "NumPages" and "NumStatsFilteredPages." The latter is the number of pages that were pruned. I summarized our results in the following table:

In our example queries we only referred to a single column in order to measure filtering efficiency precisely. If we had issued SELECT * FROM store_sales_zorder WHERE ss_cdemo_sk = 961370, then the numbers would have been 3035 for NumPages and 2776 for NumStatsFilteredPages (91.5% filtering efficiency). Filtering efficiency is proportional to the table scan time.
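The efficiency percentage quoted above is simply the share of pruned pages, which is easy to verify:

```python
# NumPages / NumStatsFilteredPages as reported in the query profile.
num_pages = 3035
num_stats_filtered_pages = 2776  # pages pruned via page-index min/max stats

efficiency = num_stats_filtered_pages / num_pages
print(f"{efficiency:.1%}")  # 91.5%
```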

We provided an example that anyone can try out. We got quite good results, even though this example is not the most ideal for Z-order. Let's see how Z-order can perform under the best circumstances.

How much does Z-ordering speed up queries?

In order to measure the effectiveness of Z-order, we chose a deterministic method of measuring query efficiency, instead of just comparing query runtimes. That is, we counted the number of pages we could skip in Parquet files, i.e., how much of the raw data in the files we could skip over without scanning (for more details on how the filtering works, see the aforementioned blog post). This metric is strongly correlated with query runtime, but gives us more precise, repeatable results.

As we have mentioned, Z-ordering is targeted at real workloads from, for example, IoT or telecommunications, but first we will evaluate it on randomly generated values. We first run simple queries on uniformly distributed values taking up 5GB of space.

  • Selecting the first sorting column, a:
    select a from uniformly_distributed_table where a = <value>
  • Selecting the second sorting column, b:
    select b from uniformly_distributed_table where b = <value>

We compared how these queries performed when the table was sorted lexically and when it was sorted using Z-ordering (i.e., SORT BY LEXICAL/ZORDER (a, b)). The figure below shows the percentage of filtered Parquet pages for the two queries. As expected, for filtering on the first column (colored blue), lexical ordering always wins; it can filter out more pages. However, Z-ordering doesn't fall far behind. Next, comparing the second columns (colored orange), we can see that Z-ordering shines: the filtering capability of the second column is close to that of the first, and much better than with lexical ordering. In other words, we gave up a little performance on queries that filter by the first column, but got a huge performance boost for queries that filter by the second column.

In the second figure, we sort by four columns. This means that we give up more filtering power on the first column, but gain relatively a lot on the other columns. That is the effect of trying to preserve the four-dimensional locality: the data is not sorted perfectly by any single column, but we get good results for all of the sorting columns together.

The cost of Z-ordering

Of course, there has to be a cost in order to achieve such great results. We measured that sorting the columns when writing a data set took around seven times longer using Z-order than when we used lexicographical ordering.

However, sorting the data is required only once, when writing the data to a table, after which we get the advantage of large speed-ups when querying the table.

There are also certain cases where Z-ordering is not effective or does not provide as much speed-up as shown above. This is the case when the values are either in a relatively small range or too sparse. The problem with a small range is that the values will be too close to each other, or even identical, within one Parquet page. In that case, Z-ordering would just add the overhead of the sorting, but would not provide any benefits at all. When the data is too sparse, the binary representations have a high chance of being distinct, and our algorithm would end up sorting the data lexically. Using multi-column lexical sorting would be more appropriate in these cases.

We have shown the benefits of Z-ordering. But how does it all actually work? Let's find out!

Behind the curtains

To dig deeper into Z-order, let's first consider a table with two integer columns, 'x' and 'y,' and look at how they are sorted in Z-order. Instead of plain numbers, we will use the binary equivalents to best illustrate how Z-order works.

In the above figure, the headers of the table show the values for each column, while in the cells we see the interleaved binary values. If we connect the interleaved values in numerical order, we get the Z-ordered values of the two columns. This can be used to compare rows of the table: (1, 3) < (2, 0).
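To make the (1, 3) < (2, 0) comparison concrete, here is a small sketch (the two-bit width and the choice that x contributes the higher bit of each pair are assumptions for illustration):

```python
def interleave(x: int, y: int, bits: int = 2) -> int:
    # Assemble the Z-value bit pair by bit pair, most significant first;
    # x is assumed to contribute the higher bit of each pair.
    z = 0
    for i in range(bits - 1, -1, -1):
        z = (z << 2) | (((x >> i) & 1) << 1) | ((y >> i) & 1)
    return z

# x=1 is 01, y=3 is 11 -> interleaved 0111 = 7
# x=2 is 10, y=0 is 00 -> interleaved 1000 = 8
print(interleave(1, 3), interleave(2, 0))  # 7 8, hence (1, 3) < (2, 0)
```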

Now we have seen how to order the values of two columns, and here is the good news: it works the same way for more columns. We just have to interleave the bits of each row and then compare those binary numbers. But wait! Wouldn't that be too costly? Well, yes. Fortunately, we have a better solution.

Consider a table with n columns, where we want to compare two rows in Z-order. How can we efficiently decide which row is greater? First, let's think about comparing two binary numbers. We go through the bits one by one until the first position where the bits differ. We call this position the most significant dimension (MSD) of the binary values. The number having the '1' bit there is greater than the other. Now let's do this without actually interleaving the bits. On top of that, let's do the comparison not just for two, but for n times two binary numbers (two rows that have n columns). So we take the binary values and determine which column is the most significant (MSD) for this pair of rows. It will be the column in which the two rows differ at the highest bit position. We loop through the columns in the order defined in the SORT BY ZORDER clause, so in case of equal highest MSDs, we pick the first one. Once we have the MSD (the dominating column) for this pair of rows, we just need to compare the row values of this column.

Here is the key algorithm as a Python code fragment.
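The following is a sketch of that idea (a reconstruction for illustration, not Impala's actual implementation; it assumes the row values are nonnegative integers, i.e., already in the unsigned common representation discussed below):

```python
def most_significant_dimension(row1, row2):
    """Return the index of the column whose values differ at the highest
    bit position -- the column that dominates the Z-order comparison."""
    best_col, best_msd = 0, -1
    for col, (a, b) in enumerate(zip(row1, row2)):
        msd = (a ^ b).bit_length()   # position of the highest differing bit
        if msd > best_msd:           # on ties, the earlier column wins
            best_col, best_msd = col, msd
    return best_col

def zorder_less(row1, row2):
    """Compare two rows in Z-order without materializing interleaved bits."""
    col = most_significant_dimension(row1, row2)
    return row1[col] < row2[col]

print(zorder_less((1, 3), (2, 0)))  # True: (1, 3) precedes (2, 0)
```

Note that this compares a pair of rows in O(n) time over the columns, instead of building and comparing full interleaved bit strings.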

Working with different types

In the algorithm above, we described how to work with unsigned binary integers. In order to support other types, we pick unsigned integers as the common representation, into which we transform all supported types. The transformation from the original values a and b to their common representations, a' and b', has the following property: if a < b, then a' is lexically less than b' with regard to their bits. Thus, for ints, INT_MIN maps to 000…000, INT_MIN+1 to 000…001, and so on, until INT_MAX maps to 111…111. The basic concept of creating the shared representation for integers follows the steps below:

  1. Convert the number to the chosen unsigned type (U).
  2. If U is bigger in size than the actual type, the bits of the smaller type are shifted up.
  3. Flip the sign bit, because the value was converted to unsigned.

For numbers of different sizes (SMALLINT, INT, BIGINT, etc.), we store them in the smallest bit range they fit into, out of the 32, 64, and 128 bit ranges. That means that when we convert the values into the common representation, we first have to shift them up by the difference in the number of bits (second step). Our target representation is an unsigned integer, therefore we also have to flip the first bit accordingly (third step).
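The three steps can be sketched for 32-bit values like this (a minimal illustration with hypothetical helper names, using Python ints to emulate fixed-width words):

```python
INT_MIN, INT_MAX = -2**31, 2**31 - 1

def int_to_u32(value: int) -> int:
    """Steps 1 and 3 for a 32-bit INT: reinterpret the two's-complement
    bits as unsigned, then flip the sign bit so numeric order becomes
    unsigned bitwise order."""
    return (value & 0xFFFFFFFF) ^ 0x80000000

def smallint_to_u32(value: int) -> int:
    """Step 2 for a 16-bit SMALLINT in a 32-bit slot: shift the bits up
    by the size difference, then flip the (now highest) sign bit."""
    return ((value & 0xFFFF) << 16) ^ 0x80000000

print(hex(int_to_u32(INT_MIN)), hex(int_to_u32(-1)), hex(int_to_u32(INT_MAX)))
# 0x0 0x7fffffff 0xffffffff
```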

We handle all the other Impala simple data types as follows:

  • In the case of floats, we have to consider that we may get different NaN values; these are handled as zero. Negative floating-point values are represented differently; in those cases, all bits have to be flipped (in contrast to the third step for integers).
  • Date and timestamp types also have internal numeric representations, which we can work with after the above conversions. 
  • Variable-length strings and chars also have an integer representation, where we extract the bits based on the string's length and pad the end with zeros. 
  • Finally, we handle NULL values as unsigned zero.
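The float rule above can be sketched as follows (an illustration under the stated rules, not Impala's actual code; `float_to_sortable_u32` is a hypothetical helper name):

```python
import math
import struct

def float_to_sortable_u32(value: float) -> int:
    """Map a 32-bit float to an unsigned int that preserves numeric order:
    NaN is treated as zero, negative values have all bits flipped, and
    non-negative values have only the sign bit flipped."""
    if math.isnan(value):
        value = 0.0
    bits = struct.unpack("<I", struct.pack("<f", value))[0]
    if bits & 0x80000000:            # negative: flip every bit
        return bits ^ 0xFFFFFFFF
    return bits | 0x80000000         # non-negative: flip (set) the sign bit

vals = [3.5, -1.0, 0.0, -2.5, 1.0]
print(sorted(vals, key=float_to_sortable_u32))  # [-2.5, -1.0, 0.0, 1.0, 3.5]
```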

Now we have covered all Impala simple types, which means we can harvest the benefits of Z-ordering not just for integers, but for all simple types.


Conclusion

In this article, we introduced an ordering that preserves locality, allowing us to greatly speed up selective queries not just on the first sorted column, but on all of the sorting columns, with only minor differences in performance when filtering by different columns. Using Z-ordering in Impala offers a big opportunity when all of the columns are (almost) equally frequently queried and have similar properties, as in telecommunications or IoT workloads. Z-order is available in upstream Impala from version 4.0. In Cloudera releases, it is available from CDH 7.2.8.


