In this study, we introduce the ideal convergence of double and multiple sequences in cone metric spaces over topological vector spaces. In 2008, Sencimen and Pehlivan [24] introduced the concepts of statistically convergent sequence and statistically Cauchy sequence in the probabilistic metric space endowed with the strong topology. The idea of statistical convergence was first introduced by Steinhaus [25] for real sequences and developed by Fast [7], then reintroduced by Schoenberg [22]. Many authors, such as [4, 6, 8, 9, 17, 21], have discussed and developed this concept. The theory of statistical convergence has many applications in various fields, such as approximation theory [5], finitely additive set functions [4], trigonometric series [27], and locally convex spaces [11]. We also analyze some properties derived from the convergence and Cauchyness of such sequences.

In this paper we introduce the concept of a statistically convergent sequence in a cone metric space, define statistically convergent, statistically Cauchy, and statistically complete cone metric spaces, and prove some theorems based on these notions. Consequently, we generalize several results from metric spaces to cone metric spaces. Note that almost uniform convergence of a sequence does not mean, as the name might suggest, that the sequence converges uniformly almost everywhere.
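As a purely illustrative sketch (the example sequence is our own assumption, not taken from the cited papers), statistical convergence of a real sequence can be checked numerically: $x_k \to L$ statistically when the natural density of $\{k \le n : |x_k - L| \ge \varepsilon\}$ tends to 0 as $n \to \infty$.

```python
# Sketch: numerically estimating statistical convergence of a real sequence.
# Assumed example (hypothetical): x_k = k on perfect squares, x_k = 1/k
# otherwise.  The sequence statistically converges to 0 because the "bad"
# indices (the perfect squares) have natural density 0.
import math

def x(k):
    r = math.isqrt(k)
    return k if r * r == k else 1.0 / k

def bad_density(n, eps=0.1, limit=0.0):
    """Fraction of indices k <= n with |x_k - limit| >= eps."""
    bad = sum(1 for k in range(1, n + 1) if abs(x(k) - limit) >= eps)
    return bad / n

for n in (10**3, 10**4, 10**5):
    print(n, bad_density(n))  # the fraction of bad indices shrinks toward 0
```

Note that the sequence itself is unbounded, so it converges statistically but not in the ordinary sense; this is exactly the gap the statistical notion is designed to bridge.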

## Definition of a convergent sequence in a metric space

The equivalence between these two definitions can be seen as a particular case of the Monge-Kantorovich duality. From the two definitions above, it is clear that the total variation distance between probability measures is always between 0 and 2. Every statistically convergent sequence in a PGM-space is statistically Cauchy. Every statistically convergent sequence in a PGM-space has a convergent subsequence.

The aim of this paper is to propose a new space called partial cone b-metric space by using both the notions of cone b-metric spaces and partial metric spaces and by defining asymptotically regular maps and sequences. Our results extend and generalize some interesting results of [11] and [21] in partial cone b-metric space. The concept of the generalized metric space (briefly G-metric space) was introduced by Mustafa and Sims in 2006 [16]. Then, in 2014, Zhou et al. [26] generalized the notion of PM-space to the G-metric spaces and defined the probabilistic generalized metric space which is denoted by PGM-space. In mathematics and statistics, weak convergence is one of many types of convergence relating to the convergence of measures.

## On completeness and compactness in fuzzy metric spaces

Next, we generalize the concept of the asymptotic density of a set to the l-dimensional case. For more information about statistical convergence, the reader may consult [2, 4, 7–10, 13–15, 18–20]. The sequence $x_1, x_2, x_3, \ldots, x_n, \ldots$ can be thought of as a set of approximations to $l$, in which the higher the $n$, the better the approximation. Note, however, that one must take care to use this alternative notation only in contexts in which the sequence is known to have a limit.
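The one-dimensional asymptotic (natural) density underlying these definitions can be estimated directly; the two sets below (multiples of 3, perfect squares) are illustrative choices of ours, not examples from the references.

```python
# Sketch: natural (asymptotic) density d(A) = lim_{n->oo} |{k <= n : k in A}| / n,
# estimated at a finite cutoff n.
import math

def density_estimate(indicator, n):
    """Estimate the natural density of A = {k : indicator(k)} using indices up to n."""
    return sum(1 for k in range(1, n + 1) if indicator(k)) / n

# Multiples of 3 have density 1/3; perfect squares have density 0.
print(density_estimate(lambda k: k % 3 == 0, 10**5))
print(density_estimate(lambda k: math.isqrt(k) ** 2 == k, 10**5))
```

A set of density 0 is exactly the kind of "negligible" index set that statistical convergence is allowed to ignore.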

- The following proposition (as well as being an important fact) is a useful exercise in how to use the axioms of a metric space in proofs.
- In this section, some basic definitions and results related to PM-space, PGM-space, and statistical convergence are presented and discussed.
- We first define uniform convergence for real-valued functions, although the concept is readily generalized to functions mapping to metric spaces and, more generally, uniform spaces (see below).
- Therefore we may continue to use positive definite second derivative approximations and there is no need to introduce any penalty terms.
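The distinction behind uniform convergence can be made concrete with the classical textbook example $f_n(x) = x^n$ on $[0, 1]$ (a standard choice, not drawn from the papers cited here): the pointwise limit is discontinuous, and the sup-norm error does not shrink, so the convergence is pointwise but not uniform.

```python
# Sketch: pointwise vs uniform convergence of f_n(x) = x**n on [0, 1].
# The pointwise limit is f(x) = 0 for x < 1 and f(1) = 1, which is
# discontinuous; by the uniform limit theorem the convergence cannot
# be uniform, and indeed sup_{x in [0,1)} |f_n(x) - 0| stays near 1.

def f_n(n, x):
    return x ** n

def sup_error(n, grid=10**4):
    """Approximate sup over [0, 1) of |f_n(x) - 0| on a finite grid."""
    return max(f_n(n, k / grid) for k in range(grid))

for n in (1, 10, 100):
    print(n, sup_error(n))  # remains close to 1 for every n
```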

Here the supremum is taken over f ranging over the set of all measurable functions from X to [−1, 1]. In the case where X is a Polish space, the total variation metric coincides with the Radon metric. We establish subgeometric bounds on convergence rate of general Markov processes in the Wasserstein metric.
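For finitely supported measures, the supremum over $f : X \to [-1, 1]$ in the definition above is attained at $f = \operatorname{sign}(p - q)$ and equals $\sum_x |p(x) - q(x)|$, which is at most 2. A minimal sketch, with illustrative discrete distributions of our own choosing:

```python
# Sketch: total variation distance between two discrete probability measures,
# in the convention sup_{f: X -> [-1,1]} |E_p[f] - E_q[f]|.  For discrete
# measures the sup is attained at f = sign(p - q) and equals sum |p - q|,
# so the distance always lies in [0, 2].
def tv_distance(p, q):
    """p, q: dicts mapping outcomes to probabilities (each summing to 1)."""
    support = set(p) | set(q)
    return sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

p = {"a": 0.5, "b": 0.5}
q = {"c": 1.0}              # disjoint supports: maximal distance
print(tv_distance(p, q))    # -> 2.0
print(tv_distance(p, p))    # -> 0.0
```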

A central theme in this book is the study of the observable distance between metric measure spaces, defined by the difference between 1-Lipschitz functions on one space and those on the other. One of the main parts of this presentation is the discussion of a natural compactification of the completion of the space of metric measure spaces. In this paper, we introduce the concept of d-point in cone metric spaces and characterize cone completeness in terms of this notion. Using Morera’s Theorem, one can show that if a sequence of analytic functions converges uniformly in a region S of the complex plane, then the limit is analytic in S. This example demonstrates that complex functions are better behaved than real functions, since the uniform limit of analytic functions on a real interval need not even be differentiable (see Weierstrass function).

So both x and f(x) are to belong to metric spaces, but there’s no reason why they should belong to the same space. In this study, we introduce the ordinary and statistical convergence of double and multiple sequences in cone metric spaces. Moreover, the relationships between these convergence types are also investigated.

If every statistically Cauchy sequence is statistically convergent, then \((X, F, T)\) is said to be statistically complete. The following theorem shows that if a sequence is statistically convergent to a point of X, then that point is unique. $d$ and $d'$ generate the same topology on $X$, so they have the same convergent sequences in $X$ by definition. And $d$ and $d'$ have the same open sets by definition of equivalent topologies. Almost uniform convergence implies almost everywhere convergence and convergence in measure. Much stronger theorems in this respect, which require not much more than pointwise convergence, can be obtained if one abandons the Riemann integral and uses the Lebesgue integral instead.
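The claim that equivalent metrics determine the same convergent sequences can be checked numerically for the standard pair $d(x, y) = |x - y|$ and the topologically equivalent bounded metric $d'(x, y) = \min(d(x, y), 1)$ on the reals; this particular pair is an illustrative assumption, not the $d, d'$ of the text.

```python
# Sketch: d and d' = min(d, 1) generate the same topology on R, so a
# sequence converges in one metric iff it converges in the other.
def d(x, y):
    return abs(x - y)

def d_prime(x, y):
    return min(abs(x - y), 1.0)

x_n = [1.0 / n for n in range(1, 1001)]        # converges to 0
print(d(x_n[-1], 0.0), d_prime(x_n[-1], 0.0))  # both distances shrink to 0
```

Once the distances fall below 1 the two metrics agree exactly, which is why small-scale (i.e., convergence) behavior is identical even though $d'$ is bounded and $d$ is not.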

The erroneous claim that the pointwise limit of a sequence of continuous functions is continuous (originally stated in terms of convergent series of continuous functions) is infamously known as “Cauchy’s wrong theorem”. The uniform limit theorem shows that a stronger form of convergence, uniform convergence, is needed to ensure the preservation of continuity in the limit function. Having defined convergence of sequences, we now hurry on to define continuity for functions as well. When we talk about continuity, we mean that f(x) gets close to f(y) as x gets close to y. In other words, we are measuring the distance both between f(x) and f(y) and between x and y.
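This closeness can be phrased quantitatively in epsilon-delta form. A minimal numeric sketch for $f(x) = x^2$ near $y = 2$ (both the function and the choice $\delta = \varepsilon/5$, valid on $[1, 3]$, are illustrative assumptions of ours):

```python
# Sketch: epsilon-delta continuity check for f(x) = x**2 at y = 2.
# On [1, 3], |x**2 - 4| = |x - 2| * |x + 2| <= 5 * |x - 2|,
# so delta = eps / 5 suffices there.
def f(x):
    return x * x

def check_continuity_at(y, eps, delta, samples=10**4):
    """Verify |f(x) - f(y)| < eps for sampled x within delta of y."""
    return all(
        abs(f(y + (2 * k / samples - 1) * delta) - f(y)) < eps
        for k in range(samples + 1)
    )

print(check_continuity_at(2.0, eps=0.1, delta=0.1 / 5))
```

A finite sample cannot prove continuity, of course; the sketch only illustrates how the two distances, between x and y and between f(x) and f(y), are tied together by the choice of delta.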