Geometrically, a hyperplane is an entity whose dimension is one less than that of its ambient space; the theory of polyhedra, for example, analyzes the faces of a polytope and their dimensions by looking at intersections with hyperplanes. Any hyperplane of a Euclidean space has exactly two unit normal vectors, and the components of a normal vector are simply the coefficients in the implicit Cartesian equation of the hyperplane (see http://tutorial.math.lamar.edu/Classes/CalcIII/EqnsOfPlanes.aspx for the three-dimensional case). One special case of a projective hyperplane is the infinite or ideal hyperplane, which is defined as the set of all points at infinity.

The objective of the SVM algorithm is to find a hyperplane in an $N$-dimensional space that distinctly classifies the data points — though even if your data is only 2-dimensional it might not be possible to find a separating hyperplane! Using the formula $\mathbf{w}^\top\mathbf{x} + b = 0$ we can obtain a first guess of the parameters by inspection; we return to this below. A closely related question is how to find a normal vector to an $(N-1)$-dimensional hyperplane in $\mathbb{R}^N$ determined by $N$ points $x_1, x_2, \ldots, x_N$, just as a 2-dimensional plane in $\mathbb{R}^3$ is determined by 3 points (provided they are non-collinear). Two tools from linear algebra will come up repeatedly: the dot product, $\mathbf{a}\cdot\mathbf{b} = a_1 b_1 + \cdots + a_n b_n$, which can be used for vectors of any dimension and which is zero if and only if the two vectors are orthogonal, and the Gram–Schmidt process from linear algebra and numerical analysis, which transforms a set of linearly independent vectors into an orthonormal set spanning the same subspace.

Now for the margin computation. We want to move from a point $\textbf{x}_0$ on the hyperplane $\mathcal{H}_0$ to the hyperplane $\mathcal{H}_1$, which lies at distance $m$. We can't add a scalar to a vector, but we know that multiplying a scalar with a vector gives another vector (a positive scalar keeps the direction), and adding two vectors is certainly possible — so if we transform $m$ into a vector $\textbf{k}$ we will be able to do the addition. Writing $\textbf{z}_0 = \textbf{x}_0+\textbf{k}$ (we can replace $\textbf{z}_0$ by $\textbf{x}_0+\textbf{k}$ because that is how we constructed it), and using the fact that $\textbf{z}_0$ lies on $\mathcal{H}_1$:

\begin{equation}\textbf{w}\cdot(\textbf{x}_0+\textbf{k})+b = 1\end{equation}

We can now replace $\textbf{k}$ by $m\frac{\textbf{w}}{\|\textbf{w}\|}$:

\begin{equation}\textbf{w}\cdot(\textbf{x}_0+m\frac{\textbf{w}}{\|\textbf{w}\|})+b = 1\end{equation}

\begin{equation}\textbf{w}\cdot\textbf{x}_0 +m\frac{\textbf{w}\cdot\textbf{w}}{\|\textbf{w}\|}+b = 1\end{equation}

We finish this computation at the end of the article. In the meantime, here is the picture to keep in mind. In two dimensions the hyperplane has equation $y(x_1,x_2)=w_1x_1+w_2x_2-w_0$, so $y(0,0)=-w_0$: to have a negative value at the origin you have to pick $w_0$ positive. The hyperplane splits the space into two regions, called half-spaces. When we put a test point into the equation of the line and get $-1$, which is less than $0$, the point lies in the negative half-space; a positive value puts it in the positive half-space, and $0$ puts it exactly on the hyperplane. Precisely, for the hyperplane $\{\mathbf{x}:\mathbf{w}\cdot\mathbf{x}=b\}$, the value $|b|/\|\mathbf{w}\|$ is the length of the closest point of the hyperplane from the origin, and the sign of $b$ determines whether that point lies along the direction $\mathbf{w}$ or $-\mathbf{w}$; further, we know that this closest point is of the form $\alpha\,\mathbf{w}$ for some scalar $\alpha$, because it is the orthogonal projection of the origin onto the hyperplane.
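The half-space test just described is easy to make concrete in code. Below is a minimal sketch; the weights $w_1=2$, $w_2=1$, $w_0=3$ and the test points are arbitrary toy values chosen only for illustration.

```python
import numpy as np

# Toy hyperplane y(x1, x2) = w1*x1 + w2*x2 - w0 (arbitrary illustrative values).
w = np.array([2.0, 1.0])   # (w1, w2)
w0 = 3.0

def side(x):
    """Negative -> negative half-space, positive -> positive half-space, 0 -> on the hyperplane."""
    return w @ x - w0

for point in [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]:
    print(point, np.sign(side(np.array(point))))
# (0, 0) -> -1.0  (note: y(0,0) = -w0, so a positive w0 gives a negative value at the origin)
# (1, 1) ->  0.0  (this point lies exactly on the hyperplane 2*x1 + x2 = 3)
# (2, 2) -> +1.0
```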
How do we actually find a hyperplane from points lying on it? In three dimensions we simply use the cross product to determine a vector orthogonal to the plane: to find the equation of the plane that passes through three given points, subtract one of the points from the other two and take the cross product of the two difference vectors. The method generalizes to higher dimensions via a generalized cross product: subtract the coordinates of one of the points from all of the others and then compute their generalized cross product to get a normal to the hyperplane — this is also how to get the orthogonal vector needed for the Hessian normal form in higher dimensions. The construction works because the determinant of a matrix vanishes iff its rows or columns are linearly dependent, so degenerate point configurations are detected automatically.

Recall the standard ways of writing a hyperplane. In Cartesian form,

$$a_1 x_1 + a_2 x_2 + \cdots + a_n x_n = d,$$

or equivalently $\mathbf{w}\cdot\mathbf{x}+b=0$, where $\mathbf{w}$ is a vector normal to the hyperplane and $b$ is an offset. The same set is the kernel of a nonzero linear map $s$ from the vector space to the underlying field, and it can also be written in homogeneous coordinates, so that e.g. the offset becomes just one more coefficient. The difference in dimension between a subspace $S$ and its ambient space $X$ is known as the codimension of $S$ with respect to $X$; a hyperplane always has codimension one. Example: in a 2D geometry the hyperplane is one-dimensional, so its equation is simply the equation of a line. And if the plane goes through the origin, the hyperplane also becomes a subspace.

This representation makes the closest-point claim from the previous section easy to prove. Indeed, for any point $\mathbf{x}$ on the hyperplane $\{\mathbf{x}:\mathbf{w}\cdot\mathbf{x}=b\}$, the Cauchy–Schwarz inequality gives $|b| = |\mathbf{w}\cdot\mathbf{x}| \le \|\mathbf{w}\|\,\|\mathbf{x}\|$, hence $\|\mathbf{x}\| \ge |b|/\|\mathbf{w}\|$, and the minimum length is attained with $\mathbf{x} = \frac{b}{\|\mathbf{w}\|^2}\,\mathbf{w}$. That is, it is the point on the hyperplane closest to the origin, as it solves the projection problem.

Back to the SVM construction: if we multiply the unit vector $\textbf{u}=\frac{\textbf{w}}{\|\textbf{w}\|}$ by $m$ we get the vector $\textbf{k}=m\textbf{u}$, and from these properties (direction of $\textbf{w}$, length $m$) we can see that $\textbf{k}$ is exactly the vector we were looking for. To separate the two classes of data points there are many possible hyperplanes that could be chosen, which is why the margin will matter.

One more remark on notation. In our definition the vectors $\mathbf{w}$ and $\mathbf{x}$ have three dimensions, while in the Wikipedia definition they have two. Given two 3-dimensional vectors $\mathbf{w}=(b,-a,1)$ and $\mathbf{x}=(1,x,y)$,

$$\mathbf{w}\cdot\mathbf{x} = b\times 1 + (-a)\times x + 1 \times y,$$
\begin{equation}\mathbf{w}\cdot\mathbf{x} = y - ax + b\end{equation}

Given two 2-dimensional vectors $\mathbf{w^\prime}=(-a,1)$ and $\mathbf{x^\prime}=(x,y)$,

$$\mathbf{w^\prime}\cdot\mathbf{x^\prime} = (-a)\times x + 1 \times y,$$
\begin{equation}\mathbf{w^\prime}\cdot\mathbf{x^\prime} = y - ax\end{equation}

so the two conventions describe the same line. Finally, another instance when orthonormal bases arise is as the set of eigenvectors of a symmetric matrix; for a general matrix, the set of eigenvectors may not be orthonormal, or even be a basis.
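The three-dimensional recipe takes only a few lines of NumPy. A minimal sketch, using the three points $(3,0,0)$, $(0,2,0)$, $(0,0,4)$ that also appear in the visualization example further below:

```python
import numpy as np

# Normal to the plane through three points: subtract one of them from the
# other two, then take the cross product of the difference vectors.
p1, p2, p3 = np.array([3.0, 0.0, 0.0]), np.array([0.0, 2.0, 0.0]), np.array([0.0, 0.0, 4.0])
n = np.cross(p2 - p1, p3 - p1)   # -> array([ 8., 12.,  6.])
d = n @ p1                       # -> 24.0, so the plane is 8x + 12y + 6z = 24
print(n, d)

# Sanity check: each of the three points satisfies n . p = d.
for p in (p1, p2, p3):
    assert np.isclose(n @ p, d)
```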
Solving the SVM problem by inspection. A Support Vector Machine (SVM) performs classification by finding the hyperplane that maximizes the margin between the two classes, and for small, low-dimensional examples you can often find that hyperplane by eye; the vectors (cases) that define the hyperplane are the support vectors. If it is so simple, why does everybody have so much pain understanding SVM? It is because, as always, the simplicity requires some abstraction and mathematical terminology to be well understood. First, we recognize another notation for the dot product: this article uses $\mathbf{w}\cdot\mathbf{x}$ instead of $\mathbf{w}^T\mathbf{x}$.

A note on orthonormal bases, since they keep appearing: usually when one needs a basis to do calculations, it is convenient to use an orthonormal basis, and the savings in effort make it worthwhile to find one before doing such a calculation. An orthonormal set must be linearly independent, and so it is a vector basis for the space it spans.

Some more vocabulary. In mathematics, a plane is a flat, two-dimensional surface that extends infinitely far; equivalently, it is a surface that completely contains every straight line connecting any two of its points, and there are many tools for drawing the plane determined by three given points (more on those at the end). The dimension of the hyperplane depends upon the number of features, and when substituting a point into its equation gives exactly zero we can say that this point is on the hyperplane (here, the line) itself. Hyperplanes, in general, are not subspaces: an affine hyperplane is a linear one translated away from the origin, where the direction of the translation is determined by the normal vector $\mathbf{w}$ and the amount by the offset $b$. Equivalently, a hyperplane in a vector space $V$ is any subspace $W$ such that the quotient $V/W$ is one-dimensional; this notion can be used in any general space in which the concept of the dimension of a subspace is defined. A hyperplane in a Euclidean space separates that space into two half-spaces, and defines a reflection that fixes the hyperplane and interchanges those two half-spaces. The dihedral angle between two non-parallel hyperplanes of a Euclidean space is the angle between the corresponding normal vectors.
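That last definition is easy to check numerically: the angle between two hyperplanes comes straight from their normal vectors. A minimal sketch (the normals below are arbitrary toy values):

```python
import numpy as np

def dihedral_angle(n1, n2):
    """Acute angle (in degrees) between two hyperplanes, from their normal vectors."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    cos = abs(n1 @ n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Example: the planes x = 0 and x + y = 0 in R^3 meet at 45 degrees.
print(dihedral_angle([1, 0, 0], [1, 1, 0]))  # 45.0
```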
This is Part 3 of my series of tutorials about the math behind Support Vector Machines. The Support Vector Machine (SVM) is a linear classifier that can be viewed as an extension of the Perceptron developed by Rosenblatt in 1958; its decision boundary is called a hyperplane. What do we know about hyperplanes that could help us? Hyperplanes are very useful because they allow us to separate the whole space into two regions: for instance, a hyperplane of an n-dimensional affine space is a flat subset of dimension n−1,[1] and it separates the space into two half-spaces.

For the small example solved by inspection, $w_0=1.4$, $w_1=-0.7$ and $w_2=-1$ is one solution. Another way to get a first guess is geometric: compute the centre of mass $A$ of one class and $B$ of the other, and use the vector connecting the two centres of mass, $C = A - B$, as the normal for the hyperplane. But with p-dimensional data this kind of guessing becomes more difficult, because you can't draw it — and optimization problems are themselves somewhat tricky.

Back to the margin. Since rescaling $\textbf{w}$ and $b$ rescales $\delta$, we can set $\delta=1$ to simplify the problem; the effect is the same (there will be no points between the two hyperplanes). So let's look at Figure 4 and consider the point A. As $\textbf{x}_0$ is in $\mathcal{H}_0$, $m$ is the distance between the hyperplanes $\mathcal{H}_0$ and $\mathcal{H}_1$, and the optimal hyperplane is given by the largest such margin. Once we have solved the resulting problem, we will have found the couple $(\textbf{w}, b)$ for which $\|\textbf{w}\|$ is the smallest possible and the constraints we fixed are met. (Along the way, recall that two vectors are orthogonal if they are perpendicular to each other, and that the vectors of an orthonormal set are moreover all required to have length one; the Gram–Schmidt process, detailed further below, produces such a set from any linearly independent one.)

Now for hyperplanes through given points. A plane can be uniquely determined by three non-collinear points (points not on a single line), and if the three intercepts of a plane don't exist you can still plug in and graph other points. In higher dimensions the generalized cross product of the previous section does the work. For example, given the points $(4,0,-1,0)$, $(1,2,3,-1)$, $(0,-1,2,0)$ and $(-1,1,-1,1)$, subtract, say, the last one from the first three to get $(5, -1, 0, -1)$, $(2, 1, 4, -2)$ and $(1, -2, 3, -1)$ and then compute the determinant $$\det\begin{bmatrix}5&-1&0&-1\\2&1&4&-2\\1&-2&3&-1\\\mathbf e_1&\mathbf e_2&\mathbf e_3&\mathbf e_4\end{bmatrix} = (13, 8, 20, 57).$$ An equation of the hyperplane is therefore $(13,8,20,57)\cdot(x_1+1,x_2-1,x_3+1,x_4-1)=0$, or $13x_1+8x_2+20x_3+57x_4=32$.
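The calculation above can be checked with NumPy. The normal is, up to scale, the one-dimensional null space of the matrix of difference vectors; using the SVD to get that null space is simply my choice here, not part of the original recipe.

```python
import numpy as np

pts = np.array([[ 4,  0, -1,  0],
                [ 1,  2,  3, -1],
                [ 0, -1,  2,  0],
                [-1,  1, -1,  1]], dtype=float)

# Difference vectors: subtract the last point from the first three.
diffs = pts[:3] - pts[3]

# The hyperplane normal spans the null space of `diffs` (rank 3, so it is 1-D).
_, _, vt = np.linalg.svd(diffs)
n = vt[-1]                      # last right-singular vector
n = n / n[0] * 13               # rescale to compare with (13, 8, 20, 57)
print(np.round(n, 6))           # -> [13.  8. 20. 57.]

d = n @ pts[3]
print(round(d, 6))              # -> 32.0, i.e. 13*x1 + 8*x2 + 20*x3 + 57*x4 = 32
```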
Let us also record the cleaner algebraic definitions. Given any orthonormal basis, there is a rotation, or rotation combined with a flip, which will send it to the standard basis. Equivalently, a hyperplane is the kernel of any nonzero linear map from the vector space to the underlying field: the domain is n-dimensional but the range is 1-dimensional, so the kernel has dimension n−1. In short, a hyperplane is a subspace of n-space whose dimension is (n−1), and such a hyperplane is the solution of a single linear equation. Precisely, a hyperplane in $\mathbb{R}^n$ is a set of the form $\{\mathbf{x} : \mathbf{w}\cdot\mathbf{x} + b = 0\}$, which in the two-dimensional task above is equivalent to writing $x_1 n_1 + x_2 n_2 + b = 0$. Given points in homogeneous coordinates, the last component can "normally" be put to $1$. Projective hyperplanes are used in projective geometry.

In the last blog we covered some of the simpler vector topics, and at the end of Part 2 we computed the distance $\|p\|$ between a point A and a hyperplane. Here is a quick summary of what we will see next: if I have a hyperplane I can compute its margin with respect to some data point, and if I have a margin delimited by two hyperplanes (the dark blue lines in the figure) I can measure its width. Our goal is to maximize the margin, and it works not only in our examples but also in p dimensions — for instance when you do text classification. (The Gram–Schmidt machinery below takes original vectors $V_1, V_2, \ldots, V_n$ and produces an orthonormal set spanning the same space; two vectors are orthogonal if and only if their dot product is zero.) Surprisingly, by the way, I have been unable to find an online tool (website/web app) to visualize planes in 3 dimensions; suggestions are collected at the end of the article.

In mathematics, people like things to be expressed concisely. The more formal definition of an initial dataset in set theory is

$$\mathcal{D} = \left\{ (\mathbf{x}_i, y_i)\mid\mathbf{x}_i \in \mathbb{R}^p,\, y_i \in \{-1,1\}\right\}_{i=1}^n.$$

Consider test points such as $(1,-1)$ and $(-1,-1)$ together with their labels $y_i$. The separation constraint can then be written compactly as

\begin{equation}y_i(\mathbf{w}\cdot\mathbf{x_i} + b ) \geq 1\end{equation}

for every $\mathbf{x}_i$, whether it has the class $-1$ or $+1$; these two equations (one per class) ensure that each observation is on the correct side of the hyperplane and at least a distance M from the hyperplane. A classic toy case is the AND function: its truth table is a tiny linearly separable dataset, and linear SVMs — also called maximal margin classifiers — are exactly the classifiers that pick the separating hyperplane with the largest margin for such data.
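A compact way to see the constraint $y_i(\mathbf{w}\cdot\mathbf{x}_i+b)\ge 1$ in action is to evaluate it on a toy dataset. Everything below — the four points, their labels, and the candidate $(\mathbf{w}, b)$ — is made up for illustration.

```python
import numpy as np

# Toy linearly separable dataset with labels y_i in {-1, +1}.
X = np.array([[2.0, 2.0], [3.0, 3.0], [0.0, 0.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])

# Candidate hyperplane w.x + b = 0, scaled so the closest points of each
# class sit exactly on w.x + b = +1 and w.x + b = -1.
w = np.array([0.5, 0.5])
b = -1.0

functional_margins = y * (X @ w + b)
print(functional_margins)                  # [1.  2.  1.  2.5]
print(np.all(functional_margins >= 1))     # True -> every constraint is satisfied
print(2 / np.linalg.norm(w))               # geometric margin 2/||w|| between the two hyperplanes
```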
Rockafellar", https://en.wikipedia.org/w/index.php?title=Hyperplane&oldid=1120402388, All Wikipedia articles written in American English, Short description is different from Wikidata, Articles lacking in-text citations from January 2013, Creative Commons Attribution-ShareAlike License 3.0, Victor V. Prasolov & VM Tikhomirov (1997,2001), This page was last edited on 6 November 2022, at 20:40. This surface intersects the feature space. + (an.bn) can be used to find the dot product for any number of vectors. Here is a screenshot of the plane through $(3,0,0),(0,2,0)$, and $(0,0,4)$: Relaxing the online restriction, I quite like Grapher (for macOS). We found a way to computem. We now have a formula to compute the margin: The only variable we can change in this formula is the norm of \mathbf{w}. Is there any known 80-bit collision attack? I was trying to visualize in 2D space. 2) How to calculate hyperplane using the given sample?. Right now you should have thefeeling that hyperplanes and margins are closely related. Point-Plane Distance Download Wolfram Notebook Given a plane (1) and a point , the normal vector to the plane is given by (2) and a vector from the plane to the point is given by (3) Projecting onto gives the distance from the point to the plane as Dropping the absolute value signs gives the signed distance, (10) (When is normalized, as in the picture, .). A square matrix with a real number is an orthogonalized matrix, if its transpose is equal to the inverse of the matrix. If the null hypothesis is never really true, is there a point to using a statistical test without a priori power analysis. The process looks overwhelmingly difficult to understand at first sight, but you can understand it by finding the Orthonormal basis of the independent vector by the Gram-Schmidt calculator. When we put this value on the equation of line we got 0. $$ proj_\vec{u_1} \ (\vec{v_2}) \ = \ \begin{bmatrix} 2.8 \\ 8.4 \end{bmatrix} $$, $$ \vec{u_2} \ = \ \vec{v_2} \ \ proj_\vec{u_1} \ (\vec{v_2}) \ = \ \begin{bmatrix} 1.2 \\ -0.4 \end{bmatrix} $$, $$ \vec{e_2} \ = \ \frac{\vec{u_2}}{| \vec{u_2 }|} \ = \ \begin{bmatrix} 0.95 \\ -0.32 \end{bmatrix} $$. We need a few de nitions rst. One of the pleasures of this site is that you can drag any of the points and it will dynamically adjust the objects you have created (so dragging a point will move the corresponding plane). Hyperplanes are very useful because they allows to separate the whole space in two regions. To find the Orthonormal basis vector, follow the steps given as under: We can Perform the gram schmidt process on the following sequence of vectors: U3= V3- {(V3,U1)/(|U1|)^2}*U1- {(V3,U2)/(|U2|)^2}*U2, Now U1,U2,U3,,Un are the orthonormal basis vectors of the original vectors V1,V2, V3,Vn, $$ \vec{u_k} =\vec{v_k} -\sum_{j=1}^{k-1}{\frac{\vec{u_j} .\vec{v_k} }{\vec{u_j}.\vec{u_j} } \vec{u_j} }\ ,\quad \vec{e_k} =\frac{\vec{u_k} }{\|\vec{u_k}\|}$$. By definition, m is what we are used to call the margin. To define an equation that allowed us to predict supplier prices based on three cost estimates encompassing two variables. From MathWorld--A Wolfram Web Resource, created by Eric Algorithm: Define an optimal hyperplane: maximize margin; Extend the above definition for non-linearly separable problems: have a penalty term . MathWorld--A Wolfram Web Resource. Imposing then that the given $n$ points lay on the plane, means to have a homogeneous linear system ', referring to the nuclear power plant in Ignalina, mean? 
A hyperplane is n−1 dimensional by definition, and an affine hyperplane is an affine subspace of codimension 1 in an affine space. A projective subspace is a set of points with the property that, for any two points of the set, all the points on the line determined by the two points are contained in the set. In convex geometry, two disjoint convex sets in n-dimensional Euclidean space are separated by a hyperplane, a result called the hyperplane separation theorem — and the half-space picture we have been using is what makes its proof straightforward. The product of the reflections in two hyperplanes is a rotation whose axis is the subspace of codimension 2 obtained by intersecting the hyperplanes, and whose angle is twice the angle between the hyperplanes; likewise, a rotation (or flip) through the origin will send an orthonormal set to another orthonormal set.

An equivalent method for the hyperplane-through-points problem uses homogeneous coordinates: in homogeneous coordinates every point $\mathbf p$ on a hyperplane satisfies the equation $\mathbf h\cdot\mathbf p=0$ for some fixed homogeneous vector $\mathbf h$, and the Cramer's-rule solution terms of the resulting linear system are the components of the normal vector you are looking for.

Back to the SVM story. You might wonder: where does the $+b$ come from? Here $b$ is used to select the hyperplane, i.e., which of the parallel hyperplanes perpendicular to the normal vector we mean. Our objective is to find the plane that has the maximum margin, and finding the biggest margin is the same thing as finding the optimal hyperplane. Recall from Part 2 that a vector has a magnitude and a direction: a distance alone could only be represented as a circle of possible destinations around a point, and looking at the picture the necessity of a vector becomes clear. From our initial statement we want this vector, and fortunately we already know a vector perpendicular to $\mathcal{H}_1$, that is $\textbf{w}$ (because $\mathcal{H}_1$ is the set $\textbf{w}\cdot\textbf{x} + b = 1$). How did I find the margin in the figure? I simply traced a line crossing $M_2$ in its middle. You can see that every time the constraints are not satisfied (Figures 6, 7 and 8) there are points between the two hyperplanes, and the margin is tight exactly when the constraint is satisfied with equality by the two support vectors. Consider the point A: it is red, so it has the class $1$, and we need to verify that it does not violate the constraint $\mathbf{w}\cdot\mathbf{x_i} + b \geq 1$; since the value is positive, we can say that this point is in the positive half-space. The same applies for point B. We did it! In the worked example, those values give a width between the support vectors of $2/\sqrt{2} = \sqrt{2}$.

If you prefer to let a library do the work, scikit-learn ships an example called "SVM: Maximum margin separating hyperplane", which starts from these imports (a fuller sketch follows right after):

```python
import matplotlib.pyplot as plt
from sklearn import svm
from sklearn.datasets import make_blobs
from sklearn.inspection import DecisionBoundaryDisplay
```
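Below is a sketch in the spirit of that scikit-learn example. The blob parameters, random seed, and styling are my choices, and `DecisionBoundaryDisplay` requires a reasonably recent scikit-learn (1.1 or later); on older versions the contour has to be drawn by hand.

```python
import matplotlib.pyplot as plt
from sklearn import svm
from sklearn.datasets import make_blobs
from sklearn.inspection import DecisionBoundaryDisplay

# Two linearly separable blobs of points.
X, y = make_blobs(n_samples=40, centers=2, random_state=6)

# A large C approximates a hard-margin (maximum-margin) linear SVM.
clf = svm.SVC(kernel="linear", C=1000)
clf.fit(X, y)

plt.scatter(X[:, 0], X[:, 1], c=y, s=30, cmap=plt.cm.Paired)
ax = plt.gca()

# Decision boundary (level 0) and the two margin hyperplanes (levels -1 and +1).
DecisionBoundaryDisplay.from_estimator(
    clf, X, plot_method="contour",
    levels=[-1, 0, 1], colors="k", linestyles=["--", "-", "--"], ax=ax,
)

# Circle the support vectors: the points that define the maximum-margin hyperplane.
ax.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
           s=100, facecolors="none", edgecolors="k")

# The fitted hyperplane w.x + b = 0 and its margin 2/||w||.
w, b = clf.coef_[0], clf.intercept_[0]
print("w =", w, "b =", b, "margin =", 2 / ((w @ w) ** 0.5))
plt.show()
```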
In geometry, a hyperplane of an n-dimensional space V is a subspace of dimension n−1, or equivalently, of codimension 1 in V. The space V may be a Euclidean space or more generally an affine space, or a vector space or a projective space, and the notion of hyperplane varies correspondingly, since the definition of subspace differs in these settings; in all cases, however, any hyperplane can be given in coordinates as the solution of a single (due to the "codimension 1" constraint) algebraic equation of degree 1. In other words, a hyperplane is a set described by a single scalar-product equality, and there is an orthogonal projection of such a subspace onto a canonical coordinate subspace that is an isomorphism. If H is a supporting hyperplane of a polyhedron P, the intersection of P and H is defined to be a "face" of the polyhedron.[3] (See also the MathWorld entry "Hyperplane", https://mathworld.wolfram.com/Hyperplane.html.)

We have now seen several ways to determine the equation of the hyperplane that contains several given points; you will gain greater insight if you learn to plot and visualize them with a pencil. For software, a great choice is GeoGebra: it starts in 2D by default, but you can click a settings button on the right to open a 3D viewer, and because it is browser-based it is also platform independent.

Finally, let us finish the margin computation. However, in the Wikipedia article about the Support Vector Machine it is said that any hyperplane can be written as the set of points $\mathbf{x}$ satisfying $\mathbf{w}\cdot\mathbf{x}+b=0$ — once again it is a question of notation. With an analogous reasoning to the one used for point A, the same applies for D, E, F and G, and you should find that the second constraint is respected for the class $-1$. The dot product of a vector with itself is the square of its norm, so:

\begin{equation}\textbf{w}\cdot\textbf{x}_0 +m\frac{\|\textbf{w}\|^2}{\|\textbf{w}\|}+b = 1\end{equation}

\begin{equation}\textbf{w}\cdot\textbf{x}_0 +m\|\textbf{w}\|+b = 1\end{equation}

\begin{equation}\textbf{w}\cdot\textbf{x}_0 +b = 1 - m\|\textbf{w}\|\end{equation}

As $\textbf{x}_0$ is in $\mathcal{H}_0$, we have $\textbf{w}\cdot\textbf{x}_0 +b = -1$, so

\begin{equation} -1= 1 - m\|\textbf{w}\|\end{equation}

\begin{equation} m\|\textbf{w}\|= 2\end{equation}

\begin{equation} m = \frac{2}{\|\textbf{w}\|}\end{equation}

The biggest margin is the margin $M_2$ shown in Figure 2. Setting: we define a linear classifier $h(\mathbf{x}) = \mathrm{sign}(\mathbf{w}^T\mathbf{x} + b)$, and this answer can be confirmed geometrically by examining the picture.
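To close the loop, here is a tiny sketch wiring together the classifier $h(\mathbf{x})=\mathrm{sign}(\mathbf{w}^\top\mathbf{x}+b)$ and the margin formula $m = 2/\|\mathbf{w}\|$. It reuses the by-inspection guess $w_0=1.4$, $w_1=-0.7$, $w_2=-1$ quoted earlier, reading it as $\mathbf{w}=(-0.7,-1)$ and $b=-w_0=-1.4$ (my interpretation of that convention); keep in mind that $2/\|\mathbf{w}\|$ equals the geometric margin only when $\mathbf{w}$ and $b$ are scaled so that the support vectors satisfy $|\mathbf{w}\cdot\mathbf{x}+b| = 1$.

```python
import numpy as np

w = np.array([-0.7, -1.0])   # (w1, w2) from the by-inspection guess
b = -1.4                     # b = -w0 with w0 = 1.4

def h(x):
    """Linear classifier h(x) = sign(w.x + b); returns +1 or -1."""
    return 1 if w @ x + b >= 0 else -1

print(h(np.array([0.0, 0.0])))     # w.(0,0) + b = -1.4 < 0  -> class -1
print(h(np.array([-2.0, -1.0])))   # 1.4 + 1.0 - 1.4 = 1.0 >= 0 -> class +1

# Width between w.x + b = -1 and w.x + b = +1 for this scaling of w and b.
print(2 / np.linalg.norm(w))       # ~1.64
```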