Two important similarity measures between sequences are the longest common subsequence (LCS) and the dynamic time warping distance (DTWD). The computation of these measures for two given sequences is a central task in a variety of applications. Simple dynamic programming algorithms solve these tasks in $O(n^2)$ time, and despite an extensive amount of research, no algorithms with significantly better worst case upper bounds are known.
In this paper, we show that an $O(n^{2-\varepsilon})$ time algorithm, for some $\varepsilon > 0$, for computing the LCS or the DTWD of two sequences of length $n$ over a constant size alphabet, refutes the popular Strong Exponential Time Hypothesis (SETH). Moreover, we show that computing the LCS of $k$ strings over an alphabet of size $O(k)$ cannot be done in $O(n^{k-\varepsilon})$ time, for any $\varepsilon > 0$, under SETH. Finally, we also address the time complexity of approximating the DTWD of two strings in truly subquadratic time.
1 Introduction
In many applications it is desirable to determine the similarity of two or more sequences of letters. The sequences could be English text, computer viruses, pointwise descriptions of curves in the plane, or even proteins or DNA sequences. Because of the large variety of applications, there are many notions of sequence similarity. Some of the most important notions are the Longest Common Subsequence (LCS), the EditDistance, the Dynamic Time Warping Distance (DTWD) and the Frechet distance measures. Considerable algorithmic research has gone into developing techniques to compute these measures of similarity. Unfortunately, even when the input consists of two strings, the time complexity of the problems is not well understood. There are classical algorithms that compute each of these measures in time that is roughly quadratic in the length of the strings, and this quadratic runtime is essentially the best known. A common technique to explain this quadratic bottleneck is to reduce the so called 3SUM problem to the problems at hand. This approach has enjoyed a tremendous amount of success [GO12]. Nevertheless, there are no known reductions from 3SUM to the above four sequence similarity problems. Two recent papers [Bri14, BI14] explained the quadratic bottleneck for Frechet distance and EditDistance by a reduction from CNFSAT, thus showing that any polynomial improvement over the quadratic running time for these two problems would imply a breakthrough in SAT algorithms (refuting the Strong Exponential Time Hypothesis (SETH) that we define below). A natural question is, can the same hypothesis explain the quadratic bottleneck for other sequence similarity measures such as DTWD and LCS? This paper answers this question in the affirmative, providing conditional lower bounds based on SETH for LCS and DTWD, along with other interesting results.
LCS.
Given two strings of symbols over some alphabet $\Sigma$, the LCS problem asks to compute the length of the longest sequence that appears as a subsequence in both input strings. It is a very basic problem that we encounter in undergraduate-level computer science courses, with a classic dynamic programming algorithm [CLRS09]. LCS attracted an extensive amount of research, both for its mathematical simplicity and for its large number of important applications, including data comparison programs and bioinformatics. In many of these applications, the large size of the data makes the quadratic time algorithm impractical. Despite a long list of improved algorithms for LCS and its variants in many different settings, e.g. [Hir75, HS77] (see [BHR00] for a survey), the best algorithms on arbitrary strings are only slightly subquadratic and have an $O(n^2/\log^2 n)$ running time [MP80] if the alphabet size is constant, and $O(n^2 \log\log n/\log^2 n)$ otherwise [BFC08, Gra14].
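The quadratic dynamic program mentioned above is short enough to state in full. The following is a standard textbook sketch (not code from this paper), using a rolling row to keep memory linear:

```python
def lcs_length(x: str, y: str) -> int:
    """Length of the longest common subsequence of x and y.

    Classic O(n*m) dynamic program, storing only one row of the table.
    """
    n, m = len(x), len(y)
    dp = [0] * (m + 1)  # dp[j] = LCS(x[:i], y[:j]) for the current i
    for i in range(1, n + 1):
        prev_diag = 0  # LCS(x[:i-1], y[:j-1]) from the previous row
        for j in range(1, m + 1):
            tmp = dp[j]
            if x[i - 1] == y[j - 1]:
                dp[j] = prev_diag + 1
            else:
                dp[j] = max(dp[j], dp[j - 1])
            prev_diag = tmp
    return dp[m]
```

On two length-$n$ strings this runs in $\Theta(n^2)$ time, which is exactly the bound the results below argue cannot be substantially improved.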
DTWD.
Given two sequences of points $x$ and $y$, the dynamic time warping distance between them is defined as the minimum, over all monotone traversals of $x$ and $y$, of the sum over the stages of the traversal of the distance between the corresponding points at that stage (see the preliminaries for a formal definition). When defined over symbols, the distance between two symbols is simply $0$ if they are equal and $1$ otherwise. The DTWD problem asks to compute the score of the optimal traversal of two given sequences. Note that if instead of taking the sum over all the stages of the traversal, we only take the maximum distance, we get the discrete Frechet distance between the sequences, a well known measure from computational geometry.
DTWD is an extremely useful similarity measure between temporal sequences which may vary in time or speed, and has long been used in speech recognition and more recently in countless data mining applications. A simple dynamic programming algorithm solves DTWD in $O(n^2)$ time and is the best known in terms of worst-case running time, while many heuristics were designed in order to obtain faster runtimes in practice (see Wang et al. for a survey [WDT10]).
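The quadratic dynamic program for DTWD is equally simple to state. The sketch below is the standard textbook formulation (not code from this paper); at each step one or both markers advance, which gives the three-way minimum:

```python
def dtwd(x, y, dist=lambda a, b: 0 if a == b else 1):
    """O(n*m) dynamic program for the dynamic time warping distance.

    dtw[i][j] = cheapest monotone traversal of x[:i] and y[:j].
    The default dist is the 0/1 symbol distance used in this paper.
    """
    n, m = len(x), len(y)
    INF = float("inf")
    dtw = [[INF] * (m + 1) for _ in range(n + 1)]
    dtw[0][0] = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dtw[i][j] = dist(x[i - 1], y[j - 1]) + min(
                dtw[i - 1][j],      # advance the marker on x only
                dtw[i][j - 1],      # advance the marker on y only
                dtw[i - 1][j - 1],  # advance both markers
            )
    return dtw[n][m]
```

For example, `dtwd("aab", "ab")` is $0$, since a traversal may hold the second marker on the first symbol of `"ab"` while both copies of `a` pass by.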
Hardness assumption.
The Strong Exponential Time Hypothesis (SETH) [IP01, IPZ01] asserts that for any $\varepsilon > 0$ there is an integer $k \geq 3$ such that $k$-SAT cannot be solved in $O(2^{(1-\varepsilon)n})$ time. Recently, SETH has been shown to imply many interesting lower bounds for polynomial time solvable problems [PW10, RV13, AV14, AVW14, Bri14, BI14]. We will base our results on the following conjecture, which is possibly more plausible than SETH: it is known to be implied by SETH, yet might still be true even if SETH turns out to be false. See Section 2.2 for a discussion.
Conjecture 1.
Given two sets $A, B$, each of $N$ vectors in $\{0,1\}^d$, and an integer $r \geq 0$, there is no $\varepsilon > 0$ and an algorithm that can decide if there is a pair of vectors $a \in A, b \in B$ such that $a \cdot b \leq r$, in $O(N^{2-\varepsilon} \cdot \mathrm{poly}(d))$ time.
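For concreteness, the trivial $O(N^2 \cdot d)$ baseline that the conjecture asserts cannot be substantially beaten is the exhaustive scan below (an illustrative sketch of ours; the name `has_far_pair` is not from the paper):

```python
def has_far_pair(A, B, r):
    """Exhaustive O(N^2 * d) check: is there a in A, b in B with <a,b> <= r?

    A and B are lists of 0/1 vectors (lists of ints) of equal dimension.
    Conjecture 1 asserts no truly subquadratic algorithm beats this scan.
    """
    return any(
        sum(ai * bi for ai, bi in zip(a, b)) <= r
        for a in A for b in B
    )
```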
Previous work.
Out of the many recent SETHbased hardness results, most relevant to our work are the following three results concerning sequence similarity measures.
Abboud, Vassilevska Williams and Weimann [AVW14] proved that a truly subquadratic algorithm (i.e., an algorithm with $O(n^{2-\varepsilon})$ running time, for some $\varepsilon > 0$) for alignment problems like Local Alignment and LocalLCS refutes SETH. However, the “locality” of those measures was heavily used in the reductions, and the results did not imply any barrier for “global” measures like LCS.
Bringmann [Bri14] proved a similar lower bound for the computation of the Frechet distance problem. As mentioned earlier, DTWD is equivalent to Frechet if we replace the “max” with a “sum”.
Most recently, Backurs and Indyk [BI14] proved a similar quadratic lower bound for EditDistance. LCS and EditDistance are closely related. A simple observation is that the computation of the LCS is equivalent to the computation of the EditDistance when only deletions and insertions are allowed, but no substitutions. Thus, intuitively, LCS seems like an easier version of EditDistance, since a solution has fewer degrees of freedom, and the lower bound for EditDistance does not immediately imply any hardness for LCS.
1.1 Our results
Our main result is to show that a truly subquadratic algorithm for LCS or DTWD refutes Conjecture 1 (and SETH), and should therefore be considered beyond the reach of current algorithmic techniques, if not impossible. Our results justify the use of subquadratic time heuristics and approximations in practice, and add two important problems to the list of SETHhard problems!
Theorem 1.
If there is an $\varepsilon > 0$ such that either

1. LCS on two sequences of length $n$ over an alphabet of constant size can be computed in $O(n^{2-\varepsilon})$ time, or

2. DTWD on two sequences of $n$ symbols from an alphabet of constant size can be computed in $O(n^{2-\varepsilon})$ time,

then Conjecture 1 is false.
We note that the nonexistence of an $O(n^{2-\varepsilon})$ time algorithm for DTWD between two sequences of symbols over an alphabet of size $k$ implies that there is no $O(n^{2-\varepsilon})$ time algorithm for DTWD between two sequences of points from $\mathbb{R}^{k-1}$ ($(k-1)$-dimensional Euclidean space). This follows because we can choose $k$ points in $(k-1)$-dimensional Euclidean space so that any two points are at the same distance from each other, i.e., choose the vertices of a regular simplex.
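One concrete way to obtain such pairwise-equidistant points is to take the $k$ standard basis vectors of $\mathbb{R}^k$, which form the vertices of a regular $(k-1)$-simplex inside the hyperplane $x_1 + \cdots + x_k = 1$; every pair is at distance $\sqrt{2}$. A quick numeric sanity check (our own illustrative code, with hypothetical function names):

```python
import math

def simplex_points(k):
    """k pairwise-equidistant points in R^k: the standard basis vectors.

    They span a regular (k-1)-simplex; every pair is at distance sqrt(2).
    """
    return [[1.0 if i == j else 0.0 for j in range(k)] for i in range(k)]

def euclid(p, q):
    """Euclidean distance between two points given as coordinate lists."""
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))
```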
Next, we consider the problem of computing the LCS of $k$ strings, which also is of great theoretical and practical interest. A simple dynamic programming algorithm solves $k$-LCS in $O(n^k)$ time, and the problem is known to be NP-hard in general, even when the strings are binary [Mai78]. When $k$ is a parameter, the problem is W[1]-hard, even over a fixed size alphabet, by a reduction from $k$-Clique [Pie03]. The parameters of the reduction imply that an $n^{o(k)}$ algorithm for $k$-LCS would refute ETH (the exponential time hypothesis (ETH) is a weaker version of SETH: it asserts that there is some $\delta > 0$ such that 3-SAT on $n$ variables requires $2^{\delta n}$ time), and an algorithm with running time sufficiently faster than $O(n^k)$ would imply a new algorithm for $k$-Clique. However, no results ruling out $O(n^{k-\varepsilon})$ upper bounds were known.
In this work, we prove that even a slight improvement over the dynamic programming algorithm is not possible under SETH when the alphabet is of size $O(k)$.
Theorem 2.
If there is a constant $\varepsilon > 0$, an integer $k \geq 2$, and an algorithm that can solve LCS on $k$ strings of length $n$ over an alphabet of size $O(k)$ in $O(n^{k-\varepsilon})$ time, then SETH is false.
A main question we leave open is whether the same lower bound holds when the alphabet size is a constant independent of $k$. In Section 6 we prove Theorem 2 and make a step towards resolving the latter question by proving that a problem we call LocalLCS has such a tight lower bound under Conjecture 1 even when the alphabet size is constant.
Finally, we consider the possibility of truly subquadratic algorithms for approximating these similarity measures. The LCS and EditDistance reductions do not imply any nontrivial hardness of approximation. For Frechet in constant-dimensional Euclidean space, Bringmann [Bri14] was able to rule out truly subquadratic approximation algorithms. Here, we show that Bringmann’s construction implies approximation hardness for DTWD and Frechet when the distance function between points is arbitrary, and is not required to satisfy the triangle inequality. The details are presented in Section 5.
1.2 Technical contribution
Our reductions build on ideas from previous SETH-based hardness results for sequence alignment problems, and are most similar to the EditDistance reduction of [BI14], with several new ideas in the constructions and the analysis. As in previous reductions, we will need two kinds of gadgets: the vector or assignment gadgets, and the selection gadgets. Two vector gadgets will be “similar” iff the two vectors satisfy the property we are interested in (we want to find a pair of vectors that together satisfy some certain property). The selection gadget construction will make sure that the existence of a pair of “similar” vector gadgets (i.e., the existence of a pair of vectors with the property) determines the overall similarity between the sequences. That is, if there is a pair of vectors satisfying the property, the sequences are more “similar” than if there is none. Typically, the vector gadgets are easier to analyze, while the selection gadgets might require very careful arguments.
There are multiple challenges in constructing and analyzing a reduction to LCS. Our first main contribution is a reduction from a weighted version of LCS (WLCS), in which some letters are more valuable than others in the optimal solution, to LCS. Reducing problems to WLCS is a significantly easier and cleaner task than reducing to LCS. Our second main contribution is in the analysis of the selection gadgets. The approach of [BI14] to analyze the selection gadgets involved a case analysis which would have been extremely tedious if applied to LCS. Instead, we use an inductive argument which decreases the number of cases significantly.
One way to show hardness of DTWD would be to show a reduction from EditDistance. However, we were not able to show such a reduction in general. Instead, we construct a mapping with the following property. Given the hard instances of EditDistance that were constructed in [BI14], consisting of two sequences $x$ and $y$, we have that $\mathrm{DTWD}(x', y') = \mathrm{EDIT}(x, y)$ for the mapped sequences $x', y'$. This requires carefully checking that this equality holds for these particularly structured sequences.
2 Preliminaries
For an integer $n$, $[n]$ stands for $\{1, 2, \ldots, n\}$.
2.1 Formal definitions of the similarity measures
Definition 1 (Longest Common Subsequence).
For two sequences $x$ and $y$ of length $n$ over an alphabet $\Sigma$, the longest sequence that appears in both as a subsequence is the longest common subsequence of $x$ and $y$, and we denote its length by $\mathrm{LCS}(x, y)$. The Longest Common Subsequence problem asks to output $\mathrm{LCS}(x, y)$.
Definition 2 (Dynamic time warping distance).
For two sequences $x$ and $y$ of points from a set $P$ and a distance function $d : P \times P \to \mathbb{R}_{\geq 0}$, the dynamic time warping distance, denoted by $\mathrm{DTWD}(x, y)$, is the minimum cost of a (monotone) traversal of $x$ and $y$.
A traversal of the two sequences has the following form: We have two markers. Initially, one is located at the beginning of $x$, and the other is located at the beginning of $y$. At every step, one or both of the markers simultaneously move one point forward in their corresponding sequences. At the end, both markers must be located at the last point of their corresponding sequences.
To determine the cost of a traversal, we consider all the steps of the traversal, and add up the following quantities to the final cost. Let the configuration of a step be the pair of points $p$ and $q$ that the first and second markers are pointing at, respectively; then the contribution of this step to the final cost is $d(p, q)$.
The DTWD problem asks to output $\mathrm{DTWD}(x, y)$.
In particular, we will be interested in the following special case of DTWD.
Definition 3 (DTWD over symbols).
The DTWD problem over sequences of symbols is the special case of DTWD in which the points come from an alphabet $\Sigma$ and the distance function is such that for any two symbols $a, b \in \Sigma$, $d(a, b) = 0$ if $a = b$ and $d(a, b) = 1$ otherwise.
Besides LCS and DTWD which are central to this work, the following two important measures will be referred to in multiple places in the paper.
Definition 4 (EditDistance).
For any two sequences $x$ and $y$ over an alphabet $\Sigma$, the edit distance $\mathrm{EDIT}(x, y)$ is equal to the minimum number of symbol insertions, symbol deletions or symbol substitutions needed to transform $x$ into $y$. The EditDistance problem asks to output $\mathrm{EDIT}(x, y)$ for two given sequences $x, y$.
Definition 5 (The discrete Frechet distance).
The definition of the Frechet distance between two sequences of points is equivalent to the definition of the DTWD with the following difference. Instead of defining the cost of a traversal to be the sum of $d(p, q)$ over all the configurations of points $p$ and $q$ from the traversal, we define it to be the maximum such distance $d(p, q)$. The Frechet problem asks to compute the minimum achievable cost of a traversal of two given sequences.
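Under this definition, the standard quadratic dynamic program for DTWD carries over with the sum replaced by a max. A sketch of ours (not code from the paper), using the same 0/1 symbol distance as before:

```python
def discrete_frechet(x, y, dist=lambda a, b: 0 if a == b else 1):
    """Discrete Frechet distance: the DTWD dynamic program with the
    traversal cost changed from a sum to a max over configurations."""
    INF = float("inf")
    n, m = len(x), len(y)
    fd = [[INF] * (m + 1) for _ in range(n + 1)]
    fd[0][0] = -INF  # identity of max, so step (1,1) costs dist(x[0], y[0])
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            fd[i][j] = max(
                dist(x[i - 1], y[j - 1]),  # cost of the current configuration
                min(fd[i - 1][j], fd[i][j - 1], fd[i - 1][j - 1]),
            )
    return fd[n][m]
```

Comparing with the earlier DTWD recurrence, the only change is `+` becoming `max`, which mirrors the sum-versus-max remark in the text.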
2.2 Satisfiability and Orthogonal Vectors
To prove hardness based on Conjecture 1 and therefore SETH, we will show reductions from the following vectorfinding problems.
Definition 6 (Orthogonal Vectors).
Given two lists $A$ and $B$, each of $N$ vectors from $\{0,1\}^d$, is there a pair $a \in A, b \in B$ that is orthogonal, i.e., $a \cdot b = 0$?
This problem is known under many names and equivalent formulations, e.g. Batched Partial Match, Disjoint Pair, and Orthogonal Pair. Starting with the reduction of Williams [Wil05], this problem or variants of it have been used in every hardness result for a problem in P that is based on SETH, via the following theorem.
Theorem 3 (Williams [Wil05]).
If for some $\varepsilon > 0$, Orthogonal Vectors on $N$ vectors in $\{0,1\}^d$ for $d = \omega(\log N)$ can be solved in $O(N^{2-\varepsilon})$ time, then CNFSAT on $n$ variables and $m$ clauses can be solved in $O(2^{(1-\varepsilon/2)n} \cdot \mathrm{poly}(m))$ time, and SETH is false.
The proof of this theorem is via the split-and-list technique and will follow from the proof of Lemma 1 below. The following is a more general version of the Orthogonal Vectors problem.
Definition 7 (MostOrthogonal Vectors).
Given two lists $A$ and $B$ of $N$ vectors from $\{0,1\}^d$ and an integer $r \geq 0$, is there a pair $a \in A, b \in B$ that has inner product at most $r$, i.e., $a \cdot b \leq r$? We call any two vectors that satisfy this condition $r$-far, and $r$-close otherwise.
Clearly, an $O(N^{2-\varepsilon})$ time algorithm for MostOrthogonal Vectors on $d$ dimensions implies a similar algorithm for Orthogonal Vectors (by setting $r = 0$), while the other direction might not be true. In fact, while faster, mildly subquadratic algorithms are known for Orthogonal Vectors when $d$ is polylogarithmic [CIP02, ILPS14, AWY15], we are not aware of any such algorithms for MostOrthogonal Vectors.
Lemma 1 below shows that such algorithms would imply new algorithms for MAXCNFSAT on a polynomial number of clauses. While such upper bounds are known for CNFSAT [AWY15, DH09], to our knowledge, upper bounds are known for MAXCNFSAT only when the number of clauses is linear in the number of variables [DW, CK04]. Together with the fact that the reductions from MostOrthogonal Vectors to LCS, DTWD and EditDistance incur only a polylogarithmic overhead, this implies that shaving a super-polylogarithmic factor over the quadratic running times for these problems might be difficult. The possibility of such improvements for pattern matching problems like EditDistance was recently suggested by Williams [Wil14], as another potential application of his breakthrough technique for All-Pairs Shortest Paths.
More importantly, Lemma 1 shows that refuting Conjecture 1 implies an algorithm for MAXCNFSAT and therefore refutes SETH.
Lemma 1.
If MostOrthogonal Vectors on $N$ vectors in $\{0,1\}^d$ can be solved in $T(N, d)$ time, then given a CNF formula on $n$ variables and $m$ clauses, we can compute the maximum number of satisfiable clauses (MAXCNFSAT) in $O(T(2^{n/2}, m) \cdot \log m)$ time.
Proof.
Given a CNF formula on $n$ variables and $m$ clauses, split the variables into two sets of size $n/2$ and list all $2^{n/2}$ partial assignments to each set. Define a vector $v_\phi \in \{0,1\}^m$ for each partial assignment $\phi$, which contains a $0$ at coordinate $i$ if $\phi$ sets any of the literals of the $i$-th clause of the formula to true, and $1$ otherwise. In other words, it contains a $0$ if the partial assignment satisfies the clause and $1$ otherwise. Now, observe that if $\phi, \psi$ are a pair of partial assignments for the first and second set of variables, then the inner product of $v_\phi$ and $v_\psi$ is equal to the number of clauses that the combined assignment does not satisfy. Therefore, to find the assignment that maximizes the number of satisfied clauses, it is enough to find a pair of partial assignments such that the inner product of $v_\phi, v_\psi$ is minimized. The latter can be easily reduced to $O(\log m)$ calls to an oracle for MostOrthogonal Vectors on $2^{n/2}$ vectors in $\{0,1\}^m$ with a standard binary search over $r$. ∎
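The split-and-list construction above can be sketched end to end in a few lines. In this illustrative code of ours, a brute-force minimum inner product stands in for the MostOrthogonal Vectors oracle, and clauses are encoded DIMACS-style as lists of signed integers (these encoding choices and the function name are assumptions, not from the paper):

```python
from itertools import product

def max_sat_by_split_and_list(n_vars, clauses):
    """Split-and-list sketch of Lemma 1.

    clauses: list of clauses; a literal v > 0 means variable v, and
    v < 0 means its negation (variables are 1-indexed).
    Returns the maximum number of simultaneously satisfiable clauses.
    """
    half = n_vars // 2
    left_vars = range(1, half + 1)
    right_vars = range(half + 1, n_vars + 1)

    def vectors(var_range):
        """One 0/1 vector per partial assignment to the given variables."""
        vecs = []
        for bits in product([False, True], repeat=len(var_range)):
            assign = dict(zip(var_range, bits))
            # coordinate i is 0 iff this partial assignment satisfies clause i
            vec = [0 if any(abs(l) in assign and assign[abs(l)] == (l > 0)
                            for l in cl) else 1
                   for cl in clauses]
            vecs.append(vec)
        return vecs

    A, B = vectors(left_vars), vectors(right_vars)
    # Minimum inner product = minimum number of unsatisfied clauses.
    # (A MostOrthogonal Vectors oracle would replace this quadratic scan.)
    min_unsat = min(sum(x * y for x, y in zip(a, b)) for a in A for b in B)
    return len(clauses) - min_unsat
```

Each list has $2^{n/2}$ vectors of dimension $m$, so a truly subquadratic MostOrthogonal Vectors algorithm would beat exhaustive search over all $2^n$ assignments, as the lemma states.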
By the above discussion, a lower bound that is based on MostOrthogonal Vectors can be considered stronger than one that is only based on SETH.
3 Hardness for LCS
In this section we provide evidence for the hardness of the Longest Common Subsequence problem, and prove the first item in Theorem 1.
As an intermediate step, we first show evidence that solving a more general version of the problem in strongly subquadratic time is impossible under Conjecture 1.
Definition 8 (Weighted Longest Common Subsequence (Wlcs)).
For two sequences $x$ and $y$ of length $n$ over an alphabet $\Sigma$ and a weight function $w : \Sigma \to \mathbb{N}$, let $z$ be the sequence that appears in both as a subsequence and maximizes the expression $\sum_i w(z_i)$. We say that $z$ is the WLCS of $x, y$ and write $\mathrm{WLCS}(x, y) = \sum_i w(z_i)$. The Weighted Longest Common Subsequence problem asks to output $\mathrm{WLCS}(x, y)$.
Note that a common subsequence $z$ of two sequences $x, y$ can be thought of as an alignment or a matching $M = \{(i_1, j_1), \ldots, (i_k, j_k)\}$ between the two sequences, so that $i_1 < \cdots < i_k$, $j_1 < \cdots < j_k$, and $x_{i_t} = y_{j_t} = z_t$ for all $t$. Clearly, the weight of the matching corresponds to the weighted length of the common subsequence $z$.
In our proofs, we will find useful the following relation between pairs of indices. For a pair $(i, j)$ and a distinct pair $(i', j')$ of indices we say that they are in conflict or they cross if $i \leq i'$ and $j \geq j'$, or $i \geq i'$ and $j \leq j'$.
3.1 Reducing Wlcs to Lcs
The following simple reduction from WLCS to LCS gives a way to translate a lower bound for WLCS to a lower bound for LCS, and allows us to simplify our proofs.
Lemma 2.
Computing the WLCS of two sequences of length $n$ over $\Sigma$ with weights bounded by $W$ can be reduced to computing the LCS of two sequences of length at most $nW$ over $\Sigma$.
Proof.
The reduction simply copies each symbol in each of the sequences a number of times equal to its weight. That is, we define a mapping $f$ from symbols in $\Sigma$ to sequences of length up to $W$ so that for any $\sigma \in \Sigma$, $f(\sigma) = \sigma^{w(\sigma)}$, i.e., the symbol $\sigma$ repeated $w(\sigma)$ times.
For a sequence $x = x_1 \cdots x_n$ of length $n$ over $\Sigma$, let $f(x) = f(x_1) f(x_2) \cdots f(x_n)$. That is, replace each symbol $x_i$ with the sequence $f(x_i)$ defined above.
Note that $|f(x)| \leq nW$, and the reduction follows from the next claim.
Claim 1.
For any two sequences $x, y$ of length $n$ over $\Sigma$, the mapping $f$ satisfies: $\mathrm{WLCS}(x, y) = \mathrm{LCS}(f(x), f(y))$.
Proof.
For brevity of notation, we let $x' = f(x)$ and $y' = f(y)$.
First, observe that $\mathrm{LCS}(x', y') \geq \mathrm{WLCS}(x, y)$, since for any common subsequence $z$ of $x, y$, the sequence $f(z)$ is a common subsequence of $x', y'$ and has length $\sum_i w(z_i)$.
In the remainder of this proof, we show that $\mathrm{LCS}(x', y') \leq \mathrm{WLCS}(x, y)$. Let $z'$ be the LCS of $x', y'$ and consider a corresponding matching $M$.
We say that a symbol of $x'$ at index $i$ belongs to interval $I_a$ iff this symbol was generated when mapping $x_a$ to the subsequence $f(x_a)$. Moreover, we say that it is at index $p$ in interval $I_a$ iff it is the $p$-th symbol in that interval.
We will go over the symbols of the alphabet in an arbitrary order, and perform the following modifications to the matching $M$ for each such symbol in turn.
Go over the indices of that are matched in to some index of , and for which , in increasing order. Consider the intervals and , both of which contain the symbol , times. Throughout our scan, we maintain the invariant that: is the first index to be matched to the interval .
If , and the next pairs in our matching are matching the rest of the interval to the interval , we do not need to modify anything, and we move on to the next index that is not a part of this interval and is matched to some index; note that at this point, the invariant is satisfied, since it cannot also be matched to the interval by the pigeonhole principle, and therefore it is the first index to be matched to this interval.
Otherwise, we modify so that now the whole intervals and are matched to one another: for each such that , and , we add pair to the matching , and remove any conflicting pairs from . We claim that we obtain a matching of at least the original size, since we add pairs and we remove only up to pairs. To see this, note that for a pair to be in conflict with one of the pairs we added, it must be one of the following three types: (1) and , or (2) but , or (3) but . Here, we use the invariant to rule out pairs for which or . However, in any matching , there cannot be both pairs of type (2) and pairs of type (3), since any such two pairs would cross. Therefore, we conclude that all conflicting pairs either come from the interval or they all come from the interval , and in any case, there are only of them. After this modification, we move on to the next index that is not a part of this interval and is matched (in the new matching) to some index; as before, this satisfies the invariant.
After we are done with all these modifications, we end up with a matching of size at least $|M|$ in which complete intervals are aligned to each other. Now, we can define a matching between $x$ and $y$ that contains all pairs $(a, b)$ for which interval $I_a$ of $x'$ is aligned to interval $I_b$ of $y'$. In words, we contract the intervals of $x', y'$ to the original symbols of $x, y$. Finally, this matching corresponds to a common subsequence of $x, y$, and the claim follows since each pair of matched intervals corresponds to some symbol $\sigma$, contributing $w(\sigma)$ matches in the interval matching and a single match of weight $w(\sigma)$ in the contracted matching. ∎
∎
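The copying reduction of Lemma 2 can be sanity-checked on small weighted instances. The sketch below is our own illustrative code (the names `wlcs` and `blow_up` are not from the paper); with unit weights, `wlcs` computes the ordinary LCS value:

```python
def wlcs(x, y, w):
    """O(n*m) dynamic program for the weighted LCS value.

    w maps each symbol to its positive integer weight; with all weights
    equal to 1 this is the plain LCS length.
    """
    n, m = len(x), len(y)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
            if x[i - 1] == y[j - 1]:
                dp[i][j] = max(dp[i][j], dp[i - 1][j - 1] + w[x[i - 1]])
    return dp[n][m]

def blow_up(x, w):
    """The mapping f of Lemma 2: copy each symbol w(sigma) times."""
    return "".join(s * w[s] for s in x)
```

For instance, with $w(a) = 2$ and $w(b) = 3$, the sequences `"ab"` and `"ba"` have WLCS value $3$ (take the single `b`), and the blown-up strings `"aabbb"` and `"bbbaa"` indeed have LCS length $3$.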
3.2 Reducing MostOrthogonal Vectors to Lcs
We are now ready to present our main reduction, proving our hardness result for LCS.
Theorem 4.
MostOrthogonal Vectors on two lists $A$ and $B$ of $N$ binary vectors in $d$ dimensions can be reduced to the LCS problem on two sequences of length $N \cdot \mathrm{poly}(d)$ over an alphabet of constant size.
Proof.
We will proceed in two steps. First, we will show that WLCS is at least as hard as the MostOrthogonal Vectors problem. Second, given that the symbols in the constructed WLCS instance will have small weights, an application of Lemma 2 will allow us to conclude that LCS is at least as hard as the MostOrthogonal Vectors problem. Our alphabet will be a fixed set of constantly many symbols.
We start with the reduction to WLCS. Let $a \in A$ and $b \in B$ denote two vectors from the MostOrthogonal Vectors instance, from the first and the second list, respectively.
We construct our coordinate gadgets as follows. For we define,
Setting the weight function so that .
These gadgets satisfy the following equalities:
Now, we define the vector gadgets as a concatenation of the coordinate gadgets. Let and .
The weight of the symbol is . It is now easy to prove the following claims.
Claim 2.
If two vectors , are far, then:
Proof.
For each coordinate, match the corresponding coordinate gadgets optimally to get a weight of at least the claimed bound. ∎
Claim 3.
If two vectors , are close, then:
Proof.
The first inequality is true because we can match the symbols, which gives the stated cost.
Now we prove that . If we match the symbols, then we cannot match any other symbols and the inequality is true. Thus, we assume now that the symbols are not matched.
Now we can check that, if there is a symbol in or that is not matched to a symbol, then we cannot achieve weight even if we match all the other symbols (except for the symbols). Therefore, we assume that all the symbols are matched. The required inequality follows from the fact that there are at least coordinates where and both have (the vectors are close), and the construction of the coordinate gadgets. ∎
Finally, we combine the vector gadgets into two sequences. Let and . Let be a dummy vector of length that is all .
And set the weights so that and .
Let , and .
The following two lemmas prove that there is a gap in the WLCS of our two sequences when there is a pair of vectors that are far as opposed to when there is none.
Lemma 3.
If there is a pair of vectors that are far, then .
Proof.
Let be such that are far. Match and to get a weight of at least . Match the vector gadgets to the left of to the vector gadgets immediately to the left of , and similarly, match the gadgets to the right. The total additional weight we get is at least . Finally, note that after the above matches, only out of the symbols in are surrounded by matched symbols. The remaining symbols can be matched, giving an additional weight of . The total weight is at least . ∎
Lemma 4.
If there is no pair of vectors that are far, then .
Proof.
The main part of the proof will be dedicated to showing that if the vector gadgets in are matched to a substring of vector gadgets from , then must be equal to . This will follow since: if , then at least one of the / symbols in will remain unmatched, and, if , then less than of the symbols in can be matched. The large weights we gave / and make this impossible in an optimal matching. It will be easy to see that in any matching in which , the total weight is at most .
Now, we introduce some notation. Let and define to be the optimal score of matching two sequences, where is composed of vector gadgets and is composed of vector gadgets , where no pair are far. Define similarly, except that we restrict the matchings so that all or symbols in (the shorter sequence) must be matched. In the following two claims we prove an upper bound on , via an upper bound on .
Claim 4.
For any integers , we can upper bound .
Proof.
Let be two sequences with vector gadgets, respectively. We will refer to these “vector gadgets” as intervals. Consider an optimal matching of and in which all the and symbols of are matched, i.e., a matching that achieves the restricted optimum; we will upper bound its weight. Note that in such a matching, each interval of must be matched completely within one or more intervals of , and each interval of has matches to at most one interval from (otherwise, it must be the case that some or symbol in is not matched).
Let be the number of intervals of that contribute at most to the weight of our optimal matching. Note that any of the other intervals must be matched to a substring of that contains at least two intervals for the following reason. The and symbols of the interval of must be matched, and, if the matching stays within a single interval of and has more than weight, then we have a pair which is far because of Claim 3. Thus, using the fact that there are only intervals in , we get the condition,
We now give an upper bound on the weight of our matching, by summing the contributions of each interval of : there are intervals contributing weight, and there are intervals matched to with unbounded contribution, but we know that even if all the symbols of an interval are matched, it can contribute at most . Therefore, the total weight of the matching can be upper bounded by
We claim that no matter what is, as long as the above condition holds, this expression is less than .
To maximize this expression, we choose the smallest possible that satisfies the above condition, since , which implies that . A key inequality, which we will use multiple times in the proof, following from the fact that the // symbols are much more important than the rest, is that , which follows since .
First, consider the case where , and therefore , which means that all the intervals of might be fully matched. Using that and that , we get the desired upper bound:
Now, assume that , and therefore . In this case, when setting as small as possible, the upper bound becomes:
which is less than , since . ∎
Next, we prove by induction that leaving / symbols in the shorter sequence unmatched will only worsen the weight of the optimal matching.
Claim 5.
For any integers , we can upper bound .
Proof.
We will prove by induction on that: for all such that , .
The base case is when and . Then and we are done.
For the inductive step, assume that the statement is true for all and we will prove it for . Let be so that and and let be sequences with intervals (assignment gadgets), respectively. Consider the optimal (unrestricted) matching of and , denote its weight by . Our goal is to show that .
If every / symbol in is matched then, by definition, the weight cannot be more than , and by Claim 4 we are done. Otherwise, consider the first unmatched / symbol, call it , and there are two cases.
The case: If is the first in , then the first in must be matched to some after (otherwise we can add this pair to the matching without violating any other pairs) which implies that none of the symbols in the interval starting at can be matched, since such matches will be in conflict with the pair containing this first . Otherwise, consider the that appears right before and note that it must be matched to some 2 in , by our choice of as the first unmatched /. Now, there are two possibilities: either there are no more intervals in after , or there is a right after in that is matched to a in that is after (from a later interval in ). Note that in either case, the interval starting at (and ending at the after it) is completely unmatched in our matching. Therefore, in this case, we let be the sequence with intervals which is obtained from by removing the interval starting at . The weight of our matching will not change if we look at it as a matching between and instead of , which implies that . Using our inductive hypothesis we conclude that , since , and we are done.
The case: The at the start of ’s interval must have been matched to some . Let be the at the end of ’s interval. Note that must be matched to some in after , since otherwise, we can add the pair to the matching, gaining a cost of , and the only possible conflicts we would create will be with pairs containing a symbol inside the interval or inside ’s interval, and if we remove all such pairs, we would lose much less than the gain; this implies that our matching could not have been optimal. Therefore, there are intervals in that are matched to a single interval in : all the intervals starting at the right before and ending at are matched to the interval. Let be the sequence obtained from by removing all these intervals and let be the sequence obtained from by removing the interval. Our matching can be split into two parts: a matching between and , and the matching of the interval to the removed interval. The contribution of the latter part to the weight of the matching can be at most the weight of all the symbols in an interval. By the inductive hypothesis, we know that any matching of and can have weight at most the corresponding bound. Summing up the two bounds on the contributions, we get that the total weight of the matching is at most:
However, note that and that , which implies that can be upper bounded by , and we are done. ∎
We are now ready to complete the proof of the Lemma. Consider the optimal matching of and . Let and be the first and last symbols in that are not matched, respectively. Note that there cannot be any matched symbols between and , since otherwise we could match either or and gain extra weight without incurring any loss. Moreover, note that cannot be the first symbol in and cannot be the last one, since those must be matched in an optimal alignment. The substring between the 3 preceding , and the 3 following , contains intervals (vector gadgets) for some . If all the 3’s are matched, we let , and focus on the only interval (vector gadget) of that has matched nonsymbols.
We can now bound the total weight of the matching by the sum of the maximum possible contribution of these intervals, and the contribution of the rest of . The substring before and including the symbol preceding and the substring after and including the symbol following can only contribute ’s to the matching, and they contain exactly such symbols, giving a contribution of . To bound the contribution of the intervals, we use Claim 5: since no symbols are matched in this part, we can “remove” those symbols for the analysis, to obtain two sequences composed of vector gadgets, respectively, in which no pair is far. The contribution of this part depends on :
If , then by Claim 5, when setting , the contribution is at most and the total weight of our matching can be upper bounded by
which is maximized when is as large as possible, since . Thus, setting , we get the upper bound: .
Otherwise, if , we apply Claim 5 with , and get that the contribution is at most , and the total weight of our matching can be upper bounded by
∎
To conclude our reduction, we note that the largest weight used in our weight function is polynomial in , and therefore the reduction of Lemma 2 gives two unweighted sequences of length , for which the LCS equals the WLCS of our . ∎
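For reference, the classical quadratic algorithm whose optimality the above reduction supports is the textbook LCS dynamic program. The following sketch is not part of the reduction; the function name and interface are illustrative.

```python
def lcs_length(x, y):
    # Classical O(|x| * |y|) dynamic program for the longest common
    # subsequence.  dp[i][j] holds the LCS length of the prefixes
    # x[:i] and y[:j].  By Theorem 1, improving this to truly
    # subquadratic time would refute SETH.
    n, m = len(x), len(y)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]
```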
4 Hardness for DTWD
In this section, we complete the proof of Theorem 1 by showing that a truly subquadratic algorithm for DTWD implies a truly subquadratic algorithm for the Most-Orthogonal Vectors problem.
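The quadratic algorithm being addressed is the standard dynamic program for DTWD. The sketch below assumes the usual formulation where a warping path may repeat symbols on either side and each aligned pair incurs a cost; the cost function |a-b| and the function name are illustrative assumptions, not taken from the reduction.

```python
def dtwd(x, y, cost=lambda a, b: abs(a - b)):
    # Standard O(|x| * |y|) dynamic program for the dynamic time
    # warping distance.  dp[i][j] is the cheapest warping of x[:i]
    # onto y[:j]; each step advances in x, in y, or in both, so a
    # symbol may be aligned to several symbols of the other sequence.
    n, m = len(x), len(y)
    INF = float("inf")
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = cost(x[i - 1], y[j - 1]) + min(
                dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1]
            )
    return dp[n][m]
```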
We first show that we can modify the reduction from CNF-SAT to EditDistance from [BI14] so that we get a reduction from Most-Orthogonal Vectors to EditDistance. We will later use properties of the two sequences produced in this reduction, call them . In particular, we will show that there is an easy transformation of into a sequence and of into a sequence so that . This will give the desired reduction from Most-Orthogonal Vectors to DTWD.
4.1 Reducing Most-Orthogonal Vectors to EditDistance
Before showing the reduction from Most-Orthogonal Vectors to EditDistance, let us recast the reduction of [BI14] as a reduction from Orthogonal Vectors instead of CNF-SAT.
Reducing Orthogonal Vectors to EditDistance.
Instead of having partial assignments for the first half of the variables and partial assignments for the second half of the variables, we have vectors in the first and the second set of vectors (we replace by in the argument). Instead of having clauses, we have coordinates for every vector (we replace by in the argument).
Instead of having clause gadgets, we have coordinate gadgets. For a vector from the first set of vectors and , we define a coordinate gadget,
For a vector from the second set of vectors and ,
We leave the same: .
Instead of assignment gadgets, we have vector gadgets.
where .
Then, we replace the statement “ is satisfied by ” with “vectors and are orthogonal” and the statement “ is satisfiable” with “there is a vector from the first set of vectors and a vector from the second set of vectors that are orthogonal”.
For a vector and , we have , instead of . We set to have for all .
We define the sequences as
This completes the modification of the argument. We can check that we never use any property of CNF-SAT that Orthogonal Vectors does not have.
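The Orthogonal Vectors instance encoded by the modified reduction can of course be solved by quadratic brute force; the hardness of EditDistance rests on the conjecture that nothing truly subquadratic is possible. A minimal sketch, assuming the vectors are given as 0/1 tuples (the function name is illustrative):

```python
def has_orthogonal_pair(A, B):
    # Quadratic-time brute force for the Orthogonal Vectors problem:
    # given two sets A, B of d-dimensional 0/1 vectors, decide whether
    # some a in A and b in B have inner product <a, b> = 0.
    return any(
        all(ai * bi == 0 for ai, bi in zip(a, b))
        for a in A for b in B
    )
```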
Reducing Most-Orthogonal Vectors to EditDistance.
Next, we modify the construction to show that EditDistance is a hard problem under a weaker assumption, namely that the Most-Orthogonal Vectors problem does not have a truly subquadratic algorithm (Conjecture 1).
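Most-Orthogonal Vectors asks, in the optimization form assumed here, for the pair of vectors minimizing the inner product (equivalently, whether some pair has inner product below a given threshold; the "far" and "close" pairs used below are with respect to such a threshold). A quadratic brute-force sketch, with an illustrative function name:

```python
def min_inner_product(A, B):
    # Quadratic brute force for Most-Orthogonal Vectors: return the
    # smallest inner product <a, b> over all a in A and b in B.
    # Conjecture 1 posits that deciding whether this minimum is below
    # a given threshold has no truly subquadratic algorithm.
    return min(
        sum(ai * bi for ai, bi in zip(a, b))
        for a in A for b in B
    )
```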
Theorem 5.
EditDistance does not have a strongly subquadratic time algorithm unless the Most-Orthogonal Vectors problem has a strongly subquadratic algorithm.
Proof.
We describe how to change the arguments from [BI14] to get the necessary reduction. We make all the modifications from the discussion above, as well as the following.
We change as follows,
We replace Lemma 1 from [BI14] with the following lemma.
Lemma 5.
If and are far vectors, then
Proof.
We do the same transformations of the sequences as in Lemma 1 from [BI14], except that we get an upper bound on the cost. ∎
We replace Lemma 2 from [BI14] with the following lemma.
Lemma 6.
If and are close vectors, then