From: Michael R. Crusoe <michael.crusoe@gmail.com>
Subject: Fix some typos

As caught by Debian's Lintian program.
--- python-pomegranate.orig/pomegranate/BayesianNetwork.pyx
+++ python-pomegranate/pomegranate/BayesianNetwork.pyx
@@ -961,7 +961,7 @@
 		is the minimum description length (MDL).
 
 		If not all states for a variable appear in the supplied data, this
-		function can not gurantee that the returned Bayesian Network is optimal
+		function cannot guarantee that the returned Bayesian Network is optimal
 		when 'exact' or 'exact-dp' is used. This is because the number of
 		states for each node is derived only from the data provided, and the
 		scoring function depends on the number of states of a variable.
--- python-pomegranate.orig/pomegranate/MarkovNetwork.pyx
+++ python-pomegranate/pomegranate/MarkovNetwork.pyx
@@ -674,7 +674,7 @@
 		is the minimum description length (MDL).
 
 		If not all states for a variable appear in the supplied data, this
-		function can not gurantee that the returned Markov Network is optimal
+		function cannot guarantee that the returned Markov Network is optimal
 		when 'exact' or 'exact-dp' is used. This is because the number of
 		states for each node is derived only from the data provided, and the
 		scoring function depends on the number of states of a variable.
--- python-pomegranate.orig/pomegranate/distributions/GammaDistribution.pyx
+++ python-pomegranate/pomegranate/distributions/GammaDistribution.pyx
@@ -68,7 +68,7 @@
 		Set the parameters of this Distribution to maximize the likelihood of
 		the given sample. Items holds some sort of sequence. If weights is
 		specified, it holds a sequence of value to weight each item by.
-		In the Gamma case, likelihood maximization is necesarily numerical, and
+		In the Gamma case, likelihood maximization is necessarily numerical, and
 		the extension to weighted values is not trivially obvious. The algorithm
 		used here includes a Newton-Raphson step for shape parameter estimation,
 		and analytical calculation of the rate parameter. The extension to
@@ -140,7 +140,7 @@
 		"""
 		Set the parameters of this Distribution to maximize the likelihood of
 		the given sample given the summaries which have been stored.
-		In the Gamma case, likelihood maximization is necesarily numerical, and
+		In the Gamma case, likelihood maximization is necessarily numerical, and
 		the extension to weighted values is not trivially obvious. The algorithm
 		used here includes a Newton-Raphson step for shape parameter estimation,
 		and analytical calculation of the rate parameter. The extension to
--- python-pomegranate.orig/pomegranate/distributions/PoissonDistribution.pyx
+++ python-pomegranate/pomegranate/distributions/PoissonDistribution.pyx
@@ -20,7 +20,7 @@
 DEF INF = float("inf")
 
 cdef class PoissonDistribution(Distribution):
-	"""The probability of a number of events occuring in a fixed time window.
+	"""The probability of a number of events occurring in a fixed time window.
 
 	A probability distribution which expresses the probability of a
 	number of events occurring in a fixed time window. It assumes these events
--- python-pomegranate.orig/pomegranate/distributions/distributions.pyx
+++ python-pomegranate/pomegranate/distributions/distributions.pyx
@@ -93,7 +93,7 @@
 		This object will not be tied to any other distribution or connected
 		in any form.
 
-		Paramters
-		---------
+		Parameters
+		----------
 		None
 
--- python-pomegranate.orig/tutorials/B_Model_Tutorial_1_Distributions.ipynb
+++ python-pomegranate/tutorials/B_Model_Tutorial_1_Distributions.ipynb
@@ -738,7 +738,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "A notable deviant from the numerics logic is the Discrete Distribution. Instead of passing parameters as floats, you instead pass in a dictionary where keys can be any objects and values are the probability of them occurring. Internally the objects that are the keys get converted to integer indices to an array whose values are the probability of that integer occuring. All models store a keymap that converts the inputs to those models to the indices of the discrete distribution. If you try to calculate the log probability of an item not present in the distribution, the default behavior is to return negative infinity, or 0.0 probability."
+    "A notable deviant from the numerics logic is the Discrete Distribution. Instead of passing parameters as floats, you instead pass in a dictionary where keys can be any objects and values are the probability of them occurring. Internally the objects that are the keys get converted to integer indices to an array whose values are the probability of that integer occurring. All models store a keymap that converts the inputs to those models to the indices of the discrete distribution. If you try to calculate the log probability of an item not present in the distribution, the default behavior is to return negative infinity, or 0.0 probability."
    ]
   },
   {
--- python-pomegranate.orig/tutorials/B_Model_Tutorial_2_General_Mixture_Models.ipynb
+++ python-pomegranate/tutorials/B_Model_Tutorial_2_General_Mixture_Models.ipynb
@@ -19,7 +19,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "It is frequently the case that the data you have is not explained by a single underlying distribution. Typically this is because there are multiple phenomena occuring in the data set, each with their own underlying distribution.  If we want to try to recover the underlying distributions, we need to have a model which has multiple components. An example could be sensor readings where the majority of the time a sensor shows no signal, but sometimes it detects some phenomena. Modeling both phenomena as a single distribution would be silly because the readings would come from two distinct phenomena.\n",
+    "It is frequently the case that the data you have is not explained by a single underlying distribution. Typically this is because there are multiple phenomena occurring in the data set, each with their own underlying distribution.  If we want to try to recover the underlying distributions, we need to have a model which has multiple components. An example could be sensor readings where the majority of the time a sensor shows no signal, but sometimes it detects some phenomena. Modeling both phenomena as a single distribution would be silly because the readings would come from two distinct phenomena.\n",
     "\n",
     "A solution to the problem of having more than one single underlying distribution is to use a mixture of distributions instead of a single distribution, commonly called a mixture model. This type of compositional model builds a more complex probability distribution from a set of simpler ones. A common type, called a Gaussian Mixture Model, is composed of Gaussian distributions, but mathematically there is no need for these distributions to all be Gaussian. In fact, there is no need for these distributions to be simple probability distributions.\n",
     "\n",
