Batch Codes from Hamming and Reed-Müller Codes

Batch codes, introduced by Ishai et al., encode a string $x \in \Sigma^{k}$ into an $m$-tuple of strings, called buckets. In this paper we consider multiset batch codes, wherein a set of $t$ users wish to access one bit of information each from the original string. We introduce a concept of optimal batch codes. We first show that binary Hamming codes are optimal batch codes. The main body of this work provides batch properties of Reed-Müller codes. We look at locality and availability properties of first order Reed-Müller codes over any finite field. We then show that binary first order Reed-Müller codes are optimal batch codes when the number of users is 4, and we generalize our study to the family of binary Reed-Müller codes which have order less than half their length.


Introduction
Consider the situation where a certain amount of data, such as information to be downloaded, is distributed over a number of devices, and multiple users wish to download this data. In order to reduce wait time, we look at locally repairable codes with availability, as noted in [4]. A locally repairable code with locality $r$ and availability $\delta$ provides the opportunity to reconstruct a particular bit of data using any one of $\delta$ disjoint sets of size at most $r$ [11]. When every user wants to reconstruct the same bit of information, this is the setting of Private Information Retrieval (PIR) codes. However, we wish to examine a scenario where the users reconstruct not necessarily distinct bits of information.
A possible answer to this more complex scenario is a family of codes called batch codes, introduced by Ishai et al. in [6]. Batch codes were originally studied as a scheme for distributing data across multiple devices while minimizing the load on each device and the total amount of storage. We study $(n, k, t, m, \tau)$ batch codes, where $n$ is the code length, $k$ is the dimension of the code, $t$ is the number of bits we wish to retrieve, $m$ is the number of buckets, and $\tau$ is the maximum number of bits used from each bucket in any reconstruction of $t$ bits. In this paper we seek to minimize the number of devices in the system and the load on each device while maximizing the amount of reconstructed data; in other words, we want to minimize $m\tau$ while maximizing $t$.
In Section 2, we formally introduce batch codes and summarize results from previous work on batch codes. We then introduce the concepts of locality and availability of a code. We conclude the section by introducing a concept of optimal batch codes.
After the background, we study the batch properties of binary Hamming codes and Reed-Müller codes. Section 3 focuses on batch properties of binary Hamming codes. We show that Hamming codes are optimal $(2^s - 1, 2^s - 1 - s, 2, m, \tau)$ batch codes for $m, \tau \in \mathbb{N}$ such that $m\tau = 2^{s-1}$.
Section 4 is the main body of this work and provides batch properties of Reed-Müller codes. We first study the induced batch properties of a code $C$ given that $C^\perp$ is of a $(u \mid u+v)$-code construction with determined batch properties. In Section 4.1 we study the locality and availability properties of first order Reed-Müller codes over any finite field. We find that the locality of $RM_q(1, \mu)$ is 2 when $q \neq 2$ and 3 when $q = 2$. Furthermore, we also show that its availability is $\frac{q^\mu - 1}{2}$ when $q \neq 2$, whereas when $q = 2$, the availability is $\frac{2^\mu - 1}{3}$ if $\mu$ is even and at least $\frac{2^\mu - 4}{4}$ otherwise. In Section 4.2 we show that binary first order Reed-Müller codes are optimal batch codes for $t = 4$. We first look at the specific $RM(1, 4)$ case and achieve parameters $(16, 5, 4, m, \tau)$ such that $m\tau = 10$. We then prove a general result that any Reed-Müller code with $\rho = 1$ and $\mu \geq 4$ has batch properties $(2^\mu, \mu + 1, 4, m, \tau)$ for any $m, \tau \in \mathbb{N}$ such that $m\tau = 10$.

Background
In 2004, Ishai et al. [6] introduced the following definition of batch codes:

Definition 2.1. An $(n, k, t, m, \tau)$ batch code over an alphabet $\Sigma$ encodes a string $x \in \Sigma^k$ into an $m$-tuple of strings, called buckets, of total length $n$ such that for each $t$-tuple of distinct indices $i_1, \dots, i_t \in [k]$, the entries $x_{i_1}, \dots, x_{i_t}$ can be decoded by reading at most $\tau$ symbols from each bucket.
We can view the buckets as servers and the symbols used from each bucket as the maximal load on each server. In the above scenario, a single user is trying to reconstruct $t$ bits of information. This definition naturally leads to the concept of multiset batch codes, which have nearly the same definition as above, but the indices $i_1, \dots, i_t \in [k]$ are not necessarily distinct. This means we have $t$ users who each wish to reconstruct a single element. This definition in turn relates to private information retrieval (PIR) codes, which are similar to batch codes but instead look to reconstruct the same bit of information $t$ times. Another notable type of batch code defined in [6] is a primitive multiset batch code, where the number of buckets is $m = n$.
3. An $(n, k, t, m, 1)$ batch code implies an $(n, k, t, \lceil \frac{m}{\tau} \rceil, \tau)$ batch code.

Much of the related research involves primitive multiset batch codes with a systematic generator matrix. In [6], the authors give results for some multiset batch codes using subcube codes and Reed-Müller codes. They use a systematic generator matrix, which often allows for better parameters. Their goal was to maximize the efficiency of the code for a fixed number of queries $t$. The focus of research on batch codes then shifted to combinatorial batch codes, first introduced in [9]. These are replication-based codes using various combinatorial objects that allow for efficient decoding procedures. We do not consider combinatorial batch codes, but some relevant results can be found in [9], [2], [3], and [10].
Next, the focus of research turned to linear batch codes, which use classical error-correcting codes. The following useful result is proven in [7]:

Theorem 2.4. Let $C$ be an $[n, k, t, n, 1]$ linear batch code over $\mathbb{F}_2$ with generator matrix $G$. Then $G$ is a generator matrix of a classical error-correcting $[n, k, d]_2$ linear code where $d \geq t$.
Because of the vast amount of information on classical error-correcting codes, we use these as our central focus in this paper. As is often the case, studying the properties of the dual codes helps us find efficient batch codes.
Next, [12] considers restricting the size of reconstruction sets. These are similar to codes with locality and availability:

Definition 2.7. A code $C$ has locality $r \geq 1$ if for any $c \in C$, any entry in $c$ can be recovered by using at most $r$ other entries of $c$.
Definition 2.8. A code $C$ with locality $r \geq 1$ has availability $\delta \geq 1$ if for any $c \in C$, any entry of $c$ can be recovered by using any one of $\delta$ disjoint sets of at most $r$ other entries of $c$.

In [12], linear multiset batch codes with restricted reconstruction sets were presented. The restriction on the size of reconstruction sets can be viewed as trying to minimize total data distribution. We restrict the size of our reconstruction sets to the locality of the code. By using this restriction, we find multiset batch codes with optimal data distribution. For cyclic codes, the locality can be derived from the following result in [5]:

Lemma 2.9. Let $C$ be an $[n, k, d]$ cyclic code, and let $d'$ be the minimum distance of its dual code $C^\perp$. Then the code $C$ has all symbol locality $d' - 1$.
This relies on each entry being in the support of a minimal weight dual codeword. We generalize this lemma to the following one.

Lemma 2.10. Let $C \subseteq \mathbb{F}_q^n$ be a linear code and let $d'$ be the minimum distance of $C^\perp$. If (1) every coordinate lies in the support of some codeword of $C^\perp$, and $C^\perp$ is generated by its minimum weight codewords, then $C$ has all symbol locality $d' - 1$.
Proof. Condition (1) implies that no coordinate of $C$ is independent of the others. If the minimum weight codewords generate $C^\perp$, then each coordinate of $C$ is in the support of at least one minimum weight codeword of $C^\perp$. This implies the all symbol locality $d' - 1$ of $C$.
Condition (1) is a reasonable condition for a code; without it, the code $C$ would have non-recoverable coordinates.
We give here a bound that relates the locality property of a linear code with its batch properties.
Lemma 2.11. Let $C$ be an $[n, k, t, m, \tau]$ linear batch code with minimal locality $r$. Then it holds that
$$m\tau \geq (t-1)r + 1. \qquad (2)$$

Proof. Consider such a code $C$. If every entry had a reconstruction set with fewer than $r$ elements, then by the definition of locality, $C$ would have locality less than $r$, a contradiction to $r$ being the minimal locality. Therefore, there exists at least one entry for which all recovery sets are of size at least $r$. If we wish to recover this entry $t$ times, then we may read the entry itself and then make use of $t - 1$ disjoint recovery sets, each of size at least $r$. This implies reading at least $(t - 1)r + 1$ entries, and since we may read only $\tau$ entries from each of the $m$ buckets, we must have that $m\tau \geq (t - 1)r + 1$.
From the perspective of individual devices storing bits of data, $m\tau$ represents the total amount of data read to provide $t$ pieces of the original data. To minimize bandwidth usage in the case where the entries of a codeword represent nodes on a network, we must minimize $m\tau$.

Definition 2.12. An $[n, k, t, m, \tau]$ linear batch code $C$ with minimal locality $r$ is optimal if it satisfies Condition (2) with equality.
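As a quick sanity check of Condition (2), the following Python sketch (an illustration of ours, not part of the original development) verifies that the Hamming-code parameters established later in the paper, locality $r = 2^{s-1} - 1$, $t = 2$, and $m\tau = 2^{s-1}$, meet the bound of Lemma 2.11 with equality:

```python
# Lower bound of Lemma 2.11: any [n, k, t, m, tau] linear batch code with
# minimal locality r satisfies m*tau >= (t - 1)*r + 1 (Condition (2)).
def bound(t, r):
    return (t - 1) * r + 1

# Binary Hamming code parameters from Section 3: r = 2^(s-1) - 1, t = 2,
# and a bucket construction achieving m*tau = 2^(s-1).
optimal = all(bound(2, 2**(s - 1) - 1) == 2**(s - 1) for s in range(2, 12))
print(optimal)  # True: the construction meets Condition (2) with equality
```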
We now show that Hamming codes are optimal linear batch codes.

Hamming Codes
Hamming codes were first introduced in 1950 by Richard Hamming. In what follows, we consider binary Hamming codes over $\mathbb{F}_2$. The parameters of binary Hamming codes are shown in [8].
Let $H$ be a matrix whose columns are all of the nonzero vectors of $\mathbb{F}_2^s$, and let $n = 2^s - 1$. We use $H$ as our parity check matrix and define the binary Hamming code $\mathcal{H}_s = \{c \in \mathbb{F}_2^n : Hc^T = 0\}$. It is well known that $\mathcal{H}_s$ is a $[2^s - 1, 2^s - 1 - s, 3]$ cyclic code. Its dual code, the simplex code, is a $[2^s - 1, s, 2^{s-1}]$ cyclic code. Thus, by Lemma 2.9, the locality of $\mathcal{H}_s$ is $2^{s-1} - 1$. We now present the batch properties of binary Hamming codes.
Hamming codes are optimal linear batch codes.
The buckets for $m = 2^{s-1}$, $\tau = 1$ are constructed as follows. Let $H$ be the parity check matrix of a binary Hamming code $\mathcal{H}$, with columns $h_j \in \mathbb{F}_2^s$ for $1 \leq j \leq n$. If $h_a + h_b = \mathbf{1}$ (the all-ones column), then we place $a$ and $b$ into the same bucket. Note that because some column $h_\ell = \mathbf{1}$ in $H$, the index $\ell$ is placed into its own bucket. Let $r_d \in \mathbb{F}_2^n$, for $1 \leq d \leq n - k = s$, be the rows of $H$. For any $c \in \mathcal{H}$, $r_d \cdot c^T = 0$, and thus $\sum_{i \in \mathrm{supp}(r_d)} c_i = 0$. As a result of this construction, if $a, b \in \mathrm{supp}(r_d)$, then entry $d$ of $h_a + h_b$ is 0, so $a$ and $b$ cannot be in the same bucket. Therefore, $\sum_{i \in \mathrm{supp}(r_d)} c_i = 0$ only involves bits in separate buckets. Hence, any bit in a codeword can be written as a linear combination of bits in separate buckets. Now, we show how to reconstruct any two bits of information.
• Case 1: If $a$ and $b$ are in separate buckets, then use $c_a$ and $c_b$.
• Case 2: Suppose $a$ and $b$ are in the same bucket. We can take $c_a$ itself. To reconstruct $c_b$, we choose an $r_d$ such that $b \in \mathrm{supp}(r_d)$. Then we can write $c_b = \sum_{i \in \mathrm{supp}(r_d) \setminus \{b\}} c_i$, which we know only involves bits in disjoint buckets.
Every bucket has cardinality 2 aside from the bucket corresponding to the all-ones column in $H$, so this construction gives us exactly $2^{s-1}$ buckets. Thus, we have shown that the batch properties hold for $m = 2^{s-1}$ and $\tau = 1$. Further, Lemma 2.3 implies that this is true for any $m, \tau$ such that $m\tau = 2^{s-1}$.
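The bucket construction above can be verified mechanically in small cases. The following Python sketch (for $s = 3$, i.e. the $[7, 4]$ Hamming code, identifying each column with the integer whose binary expansion it is, an encoding of our own choosing) builds the buckets and checks that the support of every parity check meets each bucket at most once:

```python
s = 3
n = 2**s - 1  # 7

# Columns of the parity check matrix: all nonzero vectors of F_2^s,
# identified with the integers 1..n via their binary expansions.
cols = list(range(1, n + 1))

# Pair column a with column b when h_a + h_b is the all-ones column,
# i.e. a XOR b == n; the all-ones column itself gets its own bucket.
buckets, seen = [], set()
for a in cols:
    if a in seen:
        continue
    b = a ^ n
    buckets.append({a} if b == 0 else {a, b})
    seen |= {a, b}

assert len(buckets) == 2**(s - 1)  # exactly 2^{s-1} buckets

# Each parity check r_d has support {j : bit d of j is 1}; it must
# meet every bucket in at most one coordinate.
for d in range(s):
    support = {j for j in cols if (j >> d) & 1}
    for B in buckets:
        assert len(B & support) <= 1
```

The final loop confirms the key property of the proof: every parity check equation only involves bits lying in pairwise distinct buckets, so any bit can be recovered while touching each bucket at most once.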
Note that the locality of $\mathcal{H}$ is $2^{s-1} - 1$, and therefore $t = 2$ is also maximal. Suppose instead that we could have $t \geq 3$. Then in particular, each entry must be reconstructible at least 3 times. We may take the entry itself, but then there must be at least 2 other reconstruction sets used which are disjoint and of size $2^{s-1} - 1$. These would correspond to two codewords in the dual code of weight $2^{s-1}$ with the intersection of their supports being only the given entry. The sum of these codewords thus has weight $2^{s-1} + 2^{s-1} - 2 = 2^s - 2$. However, the all-ones vector is also in the dual code, and adding it to this sum produces a codeword of weight one, a contradiction. Thus, $t = 2$ is maximal. We note that for general batch codes we are only interested in reconstructing information bits; here, however, we are able to obtain any pair of bits in the codeword. Additionally, we note that although $m\tau$ is optimized, we wish to find batch codes where $t > 2$. We desire a larger value as we are concerned with practical applications, where the goal is to quickly distribute data. Thus we move on to Reed-Müller codes, where we are able to obtain larger values of $t$.

Reed-Müller Codes
Reed-Müller codes are well-known linear codes. We give some basic properties of these codes, but an interested reader can find more information in [1].
Definition 4.1. Let $\mathbb{F}_q[X_1, \dots, X_\mu]$ be the ring of polynomials in $\mu$ variables with coefficients in $\mathbb{F}_q$ and let $\mathbb{F}_q^\mu = \{P_1, \dots, P_n\}$ (so $n = q^\mu$). The $q$-ary Reed-Müller code $RM_q(\rho, \mu)$ is defined as
$$RM_q(\rho, \mu) = \{(f(P_1), \dots, f(P_n)) : f \in \mathbb{F}_q[X_1, \dots, X_\mu]_\rho\},$$
where $\mathbb{F}_q[X_1, \dots, X_\mu]_\rho$ is the set of all multivariate polynomials over $\mathbb{F}_q$ of total degree at most $\rho$.
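To make the evaluation definition concrete, the following Python sketch (an illustration of ours) builds $RM_2(1, 3)$ by brute-force evaluation of all affine polynomials and confirms the well-known $[8, 4, 4]$ parameters:

```python
from itertools import product

q, mu = 2, 3
points = list(product(range(q), repeat=mu))  # F_2^mu as tuples, n = 8 points

# RM_2(1, mu): evaluate every affine polynomial c0 + c1*x1 + ... + cmu*xmu
# at all points of F_2^mu.
code = set()
for coeffs in product(range(q), repeat=mu + 1):
    c0, rest = coeffs[0], coeffs[1:]
    word = tuple((c0 + sum(c * x for c, x in zip(rest, P))) % q for P in points)
    code.add(word)

assert len(code) == q**(mu + 1)               # dimension mu + 1 = 4
min_wt = min(sum(w) for w in code if any(w))
assert min_wt == q**mu // 2                   # minimum distance 2^(mu-1) = 4
```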
In the binary case, Reed-Müller codes can be equivalently defined using the (u | u + v)-code construction. For completeness, we first give a description of the (u | u + v)-code construction and the related generator matrix, which can be found in [8].
Definition 4.2. Given two linear codes $C_1, C_2$ with identical alphabets and block lengths, the $(u \mid u+v)$-construction produces the new code
$$C = \{(u \mid u + v) : u \in C_1, v \in C_2\}.$$
Let $G$, $G_1$, and $G_2$ be the generator matrices for the codes $C$, $C_1$, and $C_2$, respectively, where $C$ is obtained from $C_1$ and $C_2$ via the $(u \mid u+v)$-construction. Then we have
$$G = \begin{pmatrix} G_1 & G_1 \\ 0 & G_2 \end{pmatrix}.$$
Later, our focus will be on $q = 2$, so when referring to $RM_2(\rho, \mu)$ we omit the subscript 2 for convenience. We obtain the following equivalent definition of a binary Reed-Müller code:
$$RM(\rho, \mu) = \{(u \mid u + v) : u \in RM(\rho, \mu - 1), v \in RM(\rho - 1, \mu - 1)\}.$$
As a consequence, if $G_{\rho,\mu}$ is the generator matrix of the code $RM(\rho, \mu)$, then
$$G_{\rho,\mu} = \begin{pmatrix} G_{\rho,\mu-1} & G_{\rho,\mu-1} \\ 0 & G_{\rho-1,\mu-1} \end{pmatrix}.$$
We now examine the batch properties of the $(u \mid u+v)$-code construction. The first notable result concerns codes that are contained in other codes:

Theorem 4.5. Let $C_1, C_2$ be codes of length $n$ and dimensions $k_1$ and $k_2$, respectively, such that $C_2 \subseteq C_1$. If $C_1$ is an $(n, k_1, t, m, \tau)$ batch code, then $C_2$ is an $(n, k_2, t, m, \tau)$ batch code.
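The recursive structure of binary Reed-Müller codes can be checked directly in small cases. The sketch below uses a brute-force helper `rm` of our own (spanning monomial evaluations over $\mathbb{F}_2$) to verify that $RM(1, 3)$ is exactly the $(u \mid u+v)$-construction of $RM(1, 2)$ and $RM(0, 2)$:

```python
from itertools import product

def rm(rho, mu):
    """Binary RM(rho, mu) as a set of tuples, via monomial evaluations."""
    points = list(product(range(2), repeat=mu))
    # Generators: evaluations of all monomials of total degree <= rho.
    gens = [tuple(int(all(P[i] for i in range(mu) if e[i])) for P in points)
            for e in product(range(2), repeat=mu) if sum(e) <= rho]
    # Span of the generators over F_2.
    span = {(0,) * len(points)}
    for g in gens:
        span |= {tuple((a + b) % 2 for a, b in zip(w, g)) for w in span}
    return span

c1, c2 = rm(1, 2), rm(0, 2)
uuv = {u + tuple((a + b) % 2 for a, b in zip(u, v)) for u in c1 for v in c2}
assert uuv == rm(1, 3)  # RM(1,3) = (u | u+v) of RM(1,2) and RM(0,2)
```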
Proof. Note that $C_1^\perp \subseteq C_2^\perp$ because $C_2 \subseteq C_1$. Therefore the parity check equations of $C_1$ also hold for $C_2$. Thus, to recover information in $C_2$, we may use the same parity check equations we would in $C_1$, which implies $C_2$ is at least an $(n, k_2, t, m, \tau)$ batch code.
We now introduce results for a $(u \mid u+v)$-code construction.

Theorem 4.6. Let $n, k_1, k_2 \in \mathbb{N}$ with $n \geq k_1 \geq k_2$, and let $C$ be a $[2n, k_1 + k_2]$ linear code such that $C^\perp$ is the $(u \mid u+v)$-construction of $C_1^\perp$ and $C_2^\perp$, where
• $C_1^\perp$ is an $[n, n - k_1]$ linear code and $C_2^\perp \subseteq C_1^\perp$ is an $[n, n - k_2]$ linear code, and
• $C_2$ is an $(n, k_2, t, m, \tau)$ batch code.
Then $C$ is a $(2n, k_1 + k_2, t, m, \tau)$ batch code.
Proof. The first two parameters of $C$ follow from the definition of the $(u \mid u+v)$-construction. Let $C$ be constructed as described, and let $C_2$ be an $(n, k_2, t, m, \tau)$ batch code. Consider any $t$-tuple of indices $i_1, \dots, i_t \in [2n]$, and let $i'_j = i_j$ if $i_j \in [n]$ and $i'_j = i_j - n$ otherwise. Then we know that $i'_1, \dots, i'_t \in [n]$, and so by the batch properties of $C_2$ there exist $t$ disjoint recovery sets for the entries in these indices, the union of which consists of at most $\tau$ entries in each of the $m$ buckets.
If for all $i \in \{n+1, \dots, 2n\}$ we place $i$ in the same bucket as $i - n$, then this results in $m$ buckets for $C$. Consider the recovery sets from above: if $R_{i'_j}$ is the recovery set for $i'_j$ and $i_j \in [n]$, then we claim that $R_{i'_j}$ is a recovery set for $i_j$ in $C$. This is because the recovery set comes from a vector $v \in C_2^\perp$, and by construction, since $C_2^\perp \subseteq C_1^\perp$, we have that $(v \mid 0), (0 \mid v) \in C^\perp$. Likewise, if $i_j \notin [n]$, then by using $(0 \mid v)$, we find that $R_{i'_j} + n$ (the shift of $R_{i'_j}$ by $n$) is a recovery set for $i_j$. In this way we obtain $t$ disjoint recovery sets, the union of which consists of at most $\tau$ elements from each of the $m$ buckets, and so $C$ is a $(2n, k_1 + k_2, t, m, \tau)$ batch code.
Because binary Reed-Müller codes have parity check matrices that satisfy the above properties, we turn to that family of codes.

4.1. Locality and availability properties of $RM_q(1, \mu)$.

Reed-Müller codes with $\rho = 1$ are known as first order Reed-Müller codes. We study their properties using the polynomial evaluation definition of Reed-Müller codes. We begin with a result in the $q$-ary case.

Theorem 4.7. A vector $(\lambda_1, \dots, \lambda_n) \in \mathbb{F}_q^n$ is in $RM_q(1, \mu)^\perp$ if and only if $\sum_{i=1}^{n} \lambda_i P_i = 0$ and $\sum_{i=1}^{n} \lambda_i = 0$.

Proof. First, if $(\lambda_1, \dots, \lambda_n)$ is in the dual code, then by definition
$$\sum_{i=1}^{n} \lambda_i f(P_i) = 0 \qquad (4)$$
for every polynomial $f \in \mathbb{F}_q[x_1, \dots, x_\mu]_1$. In particular, note that for any $1 \leq k \leq \mu$, taking $f = x_k$ gives $\sum_{i=1}^{n} \lambda_i p_{i,k} = 0$, where $p_{i,k}$ is the $k$th entry of the point $P_i$. We may gather these equations together for $1 \leq k \leq \mu$ to write the linear combination $\sum_{i=1}^{n} \lambda_i P_i = 0$.
We then consider $f_0 = 1$, and Equation (4) becomes $\sum_{i=1}^{n} \lambda_i = 0$, and so we have this direction. For the other direction, assume that $\sum_{i=1}^{n} \lambda_i P_i = 0$ and $\sum_{i=1}^{n} \lambda_i = 0$. Any $f \in \mathbb{F}_q[x_1, \dots, x_\mu]_1$ can be written as $f = c_0 + \sum_{k=1}^{\mu} c_k x_k$, so $\sum_{i=1}^{n} \lambda_i f(P_i) = c_0 \sum_{i=1}^{n} \lambda_i + \sum_{k=1}^{\mu} c_k \sum_{i=1}^{n} \lambda_i p_{i,k} = 0$, and thus $(\lambda_1, \dots, \lambda_n) \in RM_q(1, \mu)^\perp$.
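Theorem 4.7's characterization of the dual code can be verified exhaustively for small parameters. The Python sketch below (our own brute-force check) confirms, for $\mu = 3$, that a binary vector lies in $RM(1, 3)^\perp$ exactly when its support points sum to zero and have even cardinality:

```python
from itertools import product

mu = 3
points = list(product(range(2), repeat=mu))  # F_2^3, n = 8
n = len(points)

# Build RM_2(1, mu) by evaluating all affine polynomials at all points.
code = [tuple((cs[0] + sum(c * x for c, x in zip(cs[1:], P))) % 2
              for P in points)
        for cs in product(range(2), repeat=mu + 1)]

for lam in product(range(2), repeat=n):
    # Membership in the dual: orthogonal to every codeword.
    in_dual = all(sum(l * c for l, c in zip(lam, w)) % 2 == 0 for w in code)
    # Theorem 4.7's two conditions: the selected points sum to zero, and
    # the coefficients sum to zero (an even number of selected points).
    point_sum_zero = all(sum(lam[i] * points[i][k] for i in range(n)) % 2 == 0
                         for k in range(mu))
    coeff_sum_zero = sum(lam) % 2 == 0
    assert in_dual == (point_sum_zero and coeff_sum_zero)
```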
From Theorem 4.7 we obtain the following corollaries.

Corollary 4.8. For $\mu \geq 2$, the minimum distance of $RM(1, \mu)^\perp$ is 4.

Proof. Let $q = 2$ and suppose by way of contradiction that the minimum weight is 2. Then there exist two distinct points that sum to zero, which is not possible; thus the minimum weight is greater than 2. Note that over $\mathbb{F}_2$ the only nonzero choice for each $\lambda_i$ is 1, and thus the sum is 0 if and only if the support contains an even number of points. Therefore, the weight of every dual codeword is even, and the minimum weight is not 3. Finally, the following points lie in $\mathbb{F}_2^\mu$ (for $\mu \geq 2$): $P_0 = (0, 0, 0, \dots, 0)^T$, $P_1 = (1, 0, 0, \dots, 0)^T$, $P_2 = (0, 1, 0, \dots, 0)^T$, and $P_3 = P_1 + P_2$. These points satisfy the conditions of Theorem 4.7, and thus the minimum distance in characteristic 2 is 4.
For $q \neq 2$, Theorem 4.7 yields dual codewords of weight 3. Upon rearrangement, we have that $P_a + \alpha P_b + (-\alpha - 1) P_c = 0$. Furthermore, we find that $P_c \neq P_a, P_b$. If $P_c = P_a$, then our equation becomes $P_a + \alpha P_b + (-\alpha - 1) P_a = 0$, which simplifies to $\alpha P_b - \alpha P_a = 0$, contradicting $P_b \neq P_a$. Likewise, $P_c = P_b$ would imply $P_a + \alpha P_b + (-\alpha - 1) P_b = 0$, which becomes $P_a - P_b = 0$, another contradiction. A simple counting argument tells us that there are $\frac{q^\mu - 1}{2}$ choices of pairs $P_b, P_c$ for each $P_a$, and each of these corresponds to a unique $\lambda \in RM_q(1, \mu)^\perp$ of weight 3 that can be used to recover $c_a$, the supports of which intersect only in $\{a\}$. Thus, the locality is 2 and the availability is $\frac{q^\mu - 1}{2}$.

As proven in [1], the minimum-weight codewords generate the Reed-Müller code $RM_{p^s}(\rho, \mu)$, where $p$ is a prime number and $0 \leq \rho \leq \mu(p^s - 1)$, if and only if either $s = 1$ or $\mu = 1$ or $\rho < p$ or $\rho > (\mu - 1)(p^s - 1) + p^{s-1} - 2$. Together with Lemma 2.10, it follows that these Reed-Müller codes have all symbol availability. Thus, in the following theorems, we only consider the availability of $P_1$, as this implies all symbol availability.

Theorem. For even $\mu$, the availability of $RM(1, \mu)$ is $\frac{2^\mu - 1}{3}$.

Proof. An inductive argument on $\mu$ establishes this claim. We look at sums of evaluation points: we seek $\frac{2^\mu - 1}{3}$ disjoint sets of three points of $\mathbb{F}_2^\mu$ that sum to $\bar{P_1} = (0, \dots, 0)^T \in \mathbb{F}_2^\mu$. It is easy to verify the claim for $\mu = 2$, since there is only one equation for which this is true, namely
$$(1,0)^T + (0,1)^T + (1,1)^T = (0,0)^T.$$
Now assume the claim is true for $\mu = 2k$; we show that it also holds for $\mu = 2k + 2$. We have $\frac{2^{2k} - 1}{3}$ disjoint sets of three points that all sum to $\bar{P_1} = (0, \dots, 0)^T \in \mathbb{F}_2^{2k}$. Let $P_1 = (0, \dots, 0)^T \in \mathbb{F}_2^{2k+2}$. For any set of points $\{S_1, S_2, S_3\} \subset \mathbb{F}_2^{2k}$ that sums to $\bar{P_1}$ in $\mathbb{F}_2^{2k}$, it holds that
$$\begin{aligned}
(S_1 \mid 00) + (S_2 \mid 00) + (S_3 \mid 00) &= P_1,\\
(S_1 \mid 01) + (S_2 \mid 10) + (S_3 \mid 11) &= P_1,\\
(S_1 \mid 10) + (S_2 \mid 11) + (S_3 \mid 01) &= P_1,\\
(S_1 \mid 11) + (S_2 \mid 01) + (S_3 \mid 10) &= P_1.
\end{aligned} \qquad (5)$$
Additionally, it also holds that
$$(\bar{P_1} \mid 01) + (\bar{P_1} \mid 10) + (\bar{P_1} \mid 11) = P_1. \qquad (6)$$
The four equations in (5) all use distinct sets of points because $S_1$, $S_2$, and $S_3$ are distinct, and there are $\frac{2^{2k} - 1}{3}$ disjoint sets of points like $\{S_1, S_2, S_3\}$.
Thus, we have a total of
$$4 \cdot \frac{2^{2k} - 1}{3} + 1 = \frac{2^{2k+2} - 1}{3}$$
disjoint recovery sets; note that the extra one comes from Equation (6). Also note that our total is an integer, as $2^{2k+2} \equiv 1 \pmod{3}$. Because of this construction, every single coordinate is used, and thus we have maximal availability.

Theorem. The availability of $RM(1, \mu)$ is at least $\frac{2^\mu - 4}{4}$ when $\mu$ is odd.
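The doubling step of the even-$\mu$ proof can be implemented directly. The sketch below (points encoded as integer bitmasks, with one concrete choice of tag patterns realizing the four equations in (5); this particular encoding is ours) builds the disjoint triples and checks that all $2^\mu - 1$ nonzero points are covered:

```python
def triples(mu):
    """Disjoint 3-subsets of the nonzero points of F_2^mu (as bitmasks),
    each summing to 0, built by the inductive doubling step (mu even)."""
    assert mu % 2 == 0
    if mu == 2:
        return [(0b01, 0b10, 0b11)]        # the single base-case equation
    prev = triples(mu - 2)
    # Append a 2-bit tag t to each point: x -> (x << 2) | t.  For each old
    # triple, the four tag patterns below keep every sum equal to zero and
    # use each tag exactly once in every slot.
    tags = [(0, 0, 0), (1, 2, 3), (2, 3, 1), (3, 1, 2)]
    out = [((a << 2) | ta, (b << 2) | tb, (c << 2) | tc)
           for (a, b, c) in prev
           for (ta, tb, tc) in tags]
    out.append((0b01, 0b10, 0b11))         # the extra triple, Equation (6)
    return out

ts = triples(4)
assert len(ts) == (2**4 - 1) // 3          # availability (2^mu - 1)/3 = 5
used = [p for tri in ts for p in tri]
assert len(set(used)) == 15                # every nonzero point used once
assert all(a ^ b ^ c == 0 for a, b, c in ts)
```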
Proof. For the base case $\mu = 3$, we may take the single recovery set $\{(1,0,0)^T, (0,1,0)^T, (1,1,0)^T\}$, whose elements sum to $\bar{P_1}$; no combination of the four remaining points of $\mathbb{F}_2^3$ sums to $\bar{P_1}$. Here we have availability $1 = \frac{2^3 - 4}{4}$, and so we have our base case. Now assume that for $\mu = 2k + 1$, where $k \geq 1$, the availability of $RM(1, \mu)$ is at least $\frac{2^\mu - 4}{4}$, and that there are at least 3 points not used in any recovery set for $P_1$. Let $S_1$, $S_2$, and $S_3$ be any three points in a recovery set of $P_1 \in \mathbb{F}_2^\mu$. Then for $\mu + 2$, disjoint sets of three points that sum to $\bar{P_1} = (0, \dots, 0)^T \in \mathbb{F}_2^{\mu+2}$ can be defined by the four equations in (5). This results in at least $4 \cdot \frac{2^\mu - 4}{4} = 2^\mu - 4$ recovery sets. However, we also have points $T_1$, $T_2$, and $T_3$ that are not used in $\mathbb{F}_2^\mu$, and so we may also define the following equations:
$$(T_i \mid 00) + (T_i \mid t_i) + (\bar{P_1} \mid t_i) = \bar{P_1}, \qquad i \in \{1, 2, 3\},$$
where $t_1 = 01$, $t_2 = 10$, $t_3 = 11$. Adding these, we have availability of at least $2^\mu - 4 + 3 = 2^\mu - 1 = \frac{2^{\mu+2} - 4}{4}$.
Thus we have achieved a lower bound on $\delta$. Note, however, that we have not shown that this is necessarily an optimal construction. We now study the batch properties of Reed-Müller codes.

Theorem. $RM(1, 4)$ is a $(16, 5, 4, m, \tau)$ batch code for any $m, \tau \in \mathbb{N}$ such that $m\tau = 10$.

Proof. First, note that the dual code of $RM(1, 4)$ is $RM(2, 4)$, which informs us how to reconstruct elements of the codewords. The generator matrix for $RM(1, 4)$ can be constructed recursively from $G_{1,3}$ and $G_{0,3}$ as described above. It can be verified that any query of 4 coordinates of a codeword in $RM(1, 4)$ can be answered using a suitable partition of the coordinates into $m = 10$ buckets with $\tau = 1$. By Lemma 2.3, this holds for any $m, \tau \in \mathbb{N}$ such that $m\tau = 10$.
We now show how to extend this construction to RM(1, µ) for any µ ≥ 4.
Proof. We proceed by induction. The base case $\mu = 4$ was just shown. Now assume that for some $\mu - 1 \geq 4$, the code $RM(1, \mu - 1)$ has batch properties $(2^{\mu-1}, \mu, 4, m, \tau)$. Recall that the dual code of $C = RM(1, \mu)$ is $C^\perp = RM(\mu - 2, \mu)$. As Reed-Müller codes follow the $(u \mid u+v)$-construction, $C^\perp = RM(\mu - 2, \mu)$ is the $(u \mid u+v)$-construction of $RM(\mu - 2, \mu - 1)$ and $RM(\mu - 3, \mu - 1) = RM(1, \mu - 1)^\perp$. This means we may apply Theorem 4.6. Since $C_2 = RM(1, \mu - 1)$, we conclude that $C$ is a $(2^\mu, \mu + 1, 4, m, \tau)$ batch code. By induction, this is now true for any $\mu \geq 4$.
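The structural fact driving this induction, that the dual of a first order Reed-Müller code decomposes via the $(u \mid u+v)$-construction, can be checked by brute force in the smallest case. The sketch below (using our own helper `rm`) verifies $RM(2, 4) = \{(u \mid u+v) : u \in RM(2, 3), v \in RM(1, 3)\}$:

```python
from itertools import product

def rm(rho, mu):
    """Binary RM(rho, mu) as a set of tuples, via monomial evaluations."""
    points = list(product(range(2), repeat=mu))
    gens = [tuple(int(all(P[i] for i in range(mu) if e[i])) for P in points)
            for e in product(range(2), repeat=mu) if sum(e) <= rho]
    span = {(0,) * len(points)}
    for g in gens:
        span |= {tuple((a + b) % 2 for a, b in zip(w, g)) for w in span}
    return span

# RM(1,4)^perp = RM(2,4) decomposes as (u | u+v) with u in RM(2,3), v in RM(1,3).
c1, c2 = rm(2, 3), rm(1, 3)
uuv = {u + tuple((a + b) % 2 for a, b in zip(u, v)) for u in c1 for v in c2}
assert uuv == rm(2, 4)
```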
This provides a way to produce parity check equations for $RM(\rho + 1, 2\rho + 4)$ from those for $RM(\rho, 2\rho + 2)$, which in turn provides a way to make recovery sets for the former from those for the latter, as each dual vector corresponds to a recovery set for every index in its support.
Let $i_1, \dots, i_4 \in [4n]$ be the queried indices, and write $i_k = i'_k + s_k n$ with $i'_k \in [n]$ and $s_k \in \{0, 1, 2, 3\}$. By the induction hypothesis, we have recovery sets $R'_1, R'_2, R'_3, R'_4 \subseteq [n]$ for these indices in $RM(\rho, 2\rho + 2)$. These recovery sets are such that $\bigcup_{k=1}^{4} (R'_k \setminus \{i'_k\})$ consists of at most 1 index in each bucket. Each $R'_k$ is either $\{i'_k\}$ or the support of some vector $a \in RM(\rho + 1, 2\rho + 2)$. If $|R'_k| = 1$, then let $R_k = \{i_k\}$; otherwise let $R_k = (R'_k + s_k n) \cup (R'_k + s'_k n)$ for a suitable $s'_k \in \{0, 1, 2, 3\}$. By Lemma 4.15, we know that if $s'_k \neq s_k$, then $R_k$ is the support of some vector in $RM(\rho + 2, 2\rho + 4)$, and so this is a valid recovery set.
We must now show that these recovery sets have the desired properties given the correct choice of $s'_k$ values. Since the indices being recovered lie in $d = |\{s_1, s_2, s_3, s_4\}|$ different quarters of $[4n]$, we can take at least $d$ of the recovery sets to be singletons. Further, assume that we take as many recovery sets to be singletons as possible. In particular, this means that no recovery set will contain more than one index in each bucket: the only way $R_k$ could contain two indices in a bucket would be if $R'_k$ did, and since $R'_k$ is a proper recovery set, it could only contain two indices in one bucket if one of those indices was $i'_k$. In that case the bucket is not used in any other recovery set, and so we could instead take $R'_k = \{i'_k\}$, as per our assumption. This leaves at most $4 - d$ recovery sets which are not singletons and require a subset in a second quarter. Assume without loss of generality that these are $R_1, \dots, R_{4-d}$. Then we may let $s'_1, \dots, s'_{4-d}$ be distinct elements of $\{0, 1, 2, 3\} \setminus \{s_1, s_2, s_3, s_4\}$. Since these are distinct, the only way some $R_k \setminus \{i_k\}$ and $R_j \setminus \{i_j\}$ could have a nonempty intersection would be if condition 1 were violated; thus, condition 1 also holds for the $R_k$. We have already covered the fact that no $R'_k + s'_k n$ contains more than one index in each bucket, and since these are in separate quarters, the only way $\bigcup_{k=1}^{4} (R_k \setminus \{i_k\})$ could contain more than one index in a bucket would be if some elements being recovered in the same quarter had $(R'_k \setminus \{i'_k\}) \cup (R'_j \setminus \{i'_j\})$ consisting of more than one index in some bucket. This would violate condition 2, and so the $R_k$ also satisfy condition 2.

Conclusion
This work focuses on batch properties of binary Hamming and Reed-Müller codes. The high locality of binary Hamming codes implies that their availability is at most 1. Binary Hamming codes can be viewed as linear batch codes retrieving queries of at most 2 indices, the trivial case. Nonetheless, we prove that for $t = 2$, binary Hamming codes are actually optimal $(2^s - 1, 2^s - 1 - s, 2, m, \tau)$ batch codes for $m, \tau \in \mathbb{N}$ such that $m\tau = 2^{s-1}$.
We turn to binary Reed-Müller codes for optimal batch codes that allow larger queries, meaning $t$-tuples with $t$ larger than 2. This research direction is motivated by the large availability of first order Reed-Müller codes, as shown in this paper. We prove the optimality of first order Reed-Müller codes for $t = 4$. Finally, we generalize our study to Reed-Müller codes $RM(\rho, \mu)$ which have order less than half their length by proving that $RM(\rho', \mu)$ has batch properties $(2^\mu, k, 4, m, \tau)$ such that $m\tau = 10 \cdot 2^{2\rho - 2}$, where $\mu \in \{2\rho + 2, 2\rho + 3\}$ and $\rho' \leq \rho$.