Table of Contents

[2016-10-19] frequentist vs bayesian [[probability]]

frequentist probability: relative frequencies
Bayesian probability: degree of knowledge

[2016-02-28] Riemann surfaces [[complan]]

Alternatively one can think of Riemann sheets whereby on crossing a branch cut
one moves onto a different Riemann sheet of the function; the number of branches equals
the number of Riemann sheets. This allows closed contours to be formed by going around
branch cuts as many times as required to get back to the original Riemann sheet. However
although this scheme is very elegant, for calculational purposes it is best to treat branch
cuts as barriers one cannot cross.

[2015-01-27] Elliptic vs Hyperbolic equations [[drill]]

Elliptic equations: well-posed BVP

[2019-07-18] eh? what would be a practical example

Hyperbolic: well-posed IVP. E.g. wave equation

intuition about harmonic function https://math.stackexchange.com/questions/751293/intuitive-significance-of-harmonicity/751459 [[physics]]

A harmonic function is a function whose value at a point is always equal to the average of its values on a sphere centered at that point (reference). This is why they show up as steady-state solutions to the heat equation: if this averaging property weren't true, then heat would be flowing either from or to a point.
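
A quick numerical sanity check of this averaging property (a throwaway sketch, not from the linked answer): u(x, y) = x² − y² is harmonic, and its average over any circle matches the value at the centre.

```python
import math

def u(x, y):
    # Re((x + iy)^2) = x^2 - y^2 is harmonic: u_xx + u_yy = 2 - 2 = 0
    return x * x - y * y

def circle_average(f, x0, y0, r, n=100000):
    # approximate the mean of f over the circle of radius r centred at (x0, y0)
    total = 0.0
    for k in range(n):
        t = 2 * math.pi * k / n
        total += f(x0 + r * math.cos(t), y0 + r * math.sin(t))
    return total / n

center = u(1.0, 2.0)
avg = circle_average(u, 1.0, 2.0, 0.5)
print(abs(avg - center) < 1e-6)  # True: mean value property holds
```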

You might be interested in reading Needham's Visual Complex Analysis. [[read]]

[2018-09-03] tensor space (product) [[tensor]]

In mathematics, the tensor product V ⊗ W of two vector spaces V and W (over the same field) is itself a vector space, together with an operation of bilinear composition, denoted by ⊗, from ordered pairs in the Cartesian product V × W into V ⊗ W, in a way that generalizes the outer product. The tensor product of V and W is the vector space generated by the symbols v ⊗ w, with v ∈ V and w ∈ W, in which the relations of bilinearity are imposed for the product operation ⊗, and no other relations are assumed to hold.

https://jeremykun.com/2014/01/17/how-to-conquer-tensorphobia/
basically, (a, b) ⊗ (c, d) is a completely different element unless a is a multiple of c or b is a multiple of d
that's the difference from addition, where (a, b) + (c, d) would be (a + c, b + d)
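
A small numpy sketch of this point (my own illustration): simple tensors correspond to rank-1 outer products, and a sum of them generally is not itself a simple tensor.

```python
import numpy as np

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])

# simple tensors are outer products
t1 = np.outer(a, a)   # a ⊗ a
t2 = np.outer(b, b)   # b ⊗ b

# a ⊗ a + b ⊗ b has rank 2, so it is NOT a simple tensor u ⊗ v
print(np.linalg.matrix_rank(t1 + t2))  # 2

# contrast: any single outer product has rank at most 1
print(np.linalg.matrix_rank(np.outer(a + b, a + b)))  # 1
```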

[2016-06-20] continuity intuition [[topology]]

Consider the statement "a continuous function preserves closeness". That means if x and y are close to each other, then f(x) and f(y) are close to each other.

In analysis, a function is continuous if you can make the image in the codomain as small as you like, by choosing a small enough part of the domain.

https://math.stackexchange.com/questions/15963/what-is-the-intuition-for-the-point-set-topology-definition-of-continuity

Instead, in metric spaces, I think of a function as continuous if it preserves limits, which can be intuitively (and generalizably) be phrased by saying that f is continuous if and only if whenever x is in the closure of a set A, then f(x) is in the closure of the set f(A).

why the reverse doesn't work?

topology as set of rulers https://mathoverflow.net/a/19156

This analogy is a backport from computer science back to geometry, and a bit was lost in the translation. In CS, for open, read "verifiable", and for closed, read "non-verifiable". Termination of blackbox programs is a verifiable property: if someone gives you a program and tells you it halts, then if they're telling the truth, if you wait you'll eventually see the machine stop and know they told you the truth. Nontermination is non-verifiable: no matter how long we wait, we can never be sure that the program won't halt soon, and so we can't verify we were told the truth. –

https://mathoverflow.net/a/19173

mm. ok, function is continuous at p iff p in cl(A) means f(p) in cl(f(A)). okay that does make way more sense

[2018-10-15] https://en.wikipedia.org/wiki/Kuratowski_closure_axioms

[2019-01-26] topological spaces intuition [[topology]]

hmm interesting definition in terms of 'touches' (v)

in terms of closeness https://mathoverflow.net/a/19173/29889

then,

drill this?

compactness??

open set: if no point touches its complement? e.g. with (0, 1), for any specific point we will find an interval that separates it from (-inf, 0] ∪ [1, +inf)

and in reverse: x touches X if for all O in tau containing x, O ∩ X is not empty. ah, makes sense.

if there is a continuous map to Sierpinski space?
sierpinski space: topologically, opens are {}, {top}, {top, bot}. so, bot touches {top} but not in reverse
this is also the simplest example of the non-symmetry of 'touches', I guess
ugh. this is crap.
https://math.stackexchange.com/questions/31859/what-concept-does-an-open-set-axiomatise
Note that the above description really brings out the special role of the Sierpinski space S. Indeed, a subset of a topological space X is open precisely when the indicator function X→S is continuous.
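
A minimal sketch of that last remark, with an ad hoc finite-topology checker (all names here are made up): a subset is open exactly when its indicator map into the Sierpinski space is continuous.

```python
# Represent a finite topological space as a set of open sets (frozensets).
# f is continuous iff the preimage of every open set is open.

def is_continuous(f, opens_X, opens_Y):
    dom = frozenset(f)  # keys of the function-as-dict
    return all(frozenset(x for x in dom if f[x] in V) in opens_X
               for V in opens_Y)

# X = {0, 1, 2} with some topology
X = frozenset({0, 1, 2})
opens_X = {frozenset(), frozenset({0}), frozenset({0, 1}), X}

# Sierpinski space: only {top} is open (besides {} and the whole space)
opens_S = {frozenset(), frozenset({'top'}), frozenset({'bot', 'top'})}

def indicator(U):
    return {x: ('top' if x in U else 'bot') for x in X}

print(is_continuous(indicator(frozenset({0, 1})), opens_X, opens_S))  # True: {0,1} is open
print(is_continuous(indicator(frozenset({1})), opens_X, opens_S))     # False: {1} is not open
```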

[2019-01-26] right, I spent more than an hour proving something that could not be proved.. (that 0 touches (0, 1)). It really doesn't e.g. in discrete topology

in terms of computability

'observable' sets are open in the topology
S = {T, bot}

continuity in terms of 'touches' relation: { forall A. x v A => f(x) v f(A) } [[drill]]

connectedness in terms of 'touches' relation: { every cont. function from X to {0, 1} (discrete) is const } [[drill]]

Compactness [[topology]]

pdf by Tao: compactness and compactification

The extended real line is compact: any sequence x_n of extended real numbers will have a subsequence that either converges to +∞, converges to −∞, or converges to a finite number. Thus by using this compactification of the real line, we can generalise the notion of a limit, by no longer requiring that the limit has to be a real number.

read synthetic topology of data types and classical spaces

[2016-08-14] ODE Integrating factor: y' + f(t) y = g(t)

Homogeneous: Σ_{n=0}^{N} A_n y^{(n)} = 0
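
A quick sympy check of the integrating factor setup, on an assumed concrete instance f(t) = 2/t, g(t) = t:

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# concrete instance of y' + f(t) y = g(t), with f(t) = 2/t, g(t) = t
ode = sp.Eq(y(t).diff(t) + (2 / t) * y(t), t)

# the integrating factor is mu(t) = exp(∫ f dt) = t**2,
# so (mu*y)' = mu*g gives t**2 * y = t**4/4 + C, i.e. y = t**2/4 + C/t**2
sol = sp.dsolve(ode, y(t))
print(sol)
print(sp.checkodesol(ode, sol))  # (True, 0) means the solution satisfies the ODE
```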

[2016-06-20] free structures [[algebra]]

free monoids are lists
free semigroups are nonempty lists?
free magmas are nonempty binary trees (data is in the leaves)
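
A small sketch of why free monoids are lists (my own illustration of the universal property): any map from the generators into a monoid extends uniquely to a monoid homomorphism on lists, via a fold.

```python
from functools import reduce

# Free monoid on a set A = lists over A. Universal property (a sketch):
# any map f: A -> M into a monoid (M, op, unit) extends to a monoid
# homomorphism hom: [A] -> M, given by folding f over the list.

def extend(f, op, unit):
    return lambda xs: reduce(op, (f(x) for x in xs), unit)

# example target monoid: (int, +, 0), with f = len on strings
hom = extend(len, lambda a, b: a + b, 0)

print(hom([]))           # 0: the unit maps to the unit
print(hom(["ab", "c"]))  # 3

# homomorphism law: hom(xs ++ ys) == op(hom(xs), hom(ys))
xs, ys = ["ab"], ["c", "def"]
print(hom(xs + ys) == hom(xs) + hom(ys))  # True
```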

[2014-07-28] free monoid

A monoid M is freely generated by a subset A of M, if the following conditions hold

[2020-03-22] Shtetl-Optimized » Blog Archive » Ask Me Anything: Apocalypse Edition [[math]]

https://www.scottaaronson.com/blog/?p=4684

As a simple example, I’m totally fine uttering statements like, “a family of Boolean functions with superquadratic gap between randomized and quantum query complexities exists“—even though the type of “existence” that we’re talking about clearly isn’t physical (some would call it Platonic).

[2019-12-26] Lean has real manifolds! | Xena [[lean]]

https://xenaproject.wordpress.com/2019/12/17/lean-has-real-manifolds/#comments

[2019-12-26] Prove a theorem. Write a function. | Xena [[lean]]

https://xenaproject.wordpress.com/2019/12/07/prove-a-theorem-write-a-function/

[2015-02-14] cos(|z|) Not holomorphic anywhere except z = 0? [[complan]]

[2014-07-01] types of equality [[typetheory]]

[2016-10-24] Zeno's paradox: Achilles and Tortoise

The point: an infinite sum of positive numbers is not always unbounded.

[2016-10-24] Galileo's Paradox:

First, some numbers are squares, while others are not; therefore, all the numbers, including both squares and non-squares, must be more numerous than just the squares. And yet, for every square there is exactly one positive number that is its square root, and for every number there is exactly one square; hence, there cannot be more of one than of the other. This is an early use, though not the first, of the idea of one-to-one correspondence in the context of infinite sets.

Resolution: different definitions of equally sized sets.

[2016-10-27] Heat equation: U_t = U_xx [[physics]]

[2016-10-26] methods of integration

Euler's method

https://en.wikipedia.org/wiki/Semi-implicit_Euler_method

Euler’s method is easy to understand, but it has one very large problem.
Since the method approximates the solution as a linear equation, the Euler
solution always underestimates the curvature of the solution. The result
is that for any sort of oscillatory motion, the energy of Euler’s solution
increases with time.

Energy drift: https://en.wikipedia.org/wiki/Energy_drift

https://math.stackexchange.com/questions/354342/eulers-method-on-differential-equation

Total error order: dt

Runge-Kutta

Forward Euler

Explicit: Y(t + dt) = F(Y(t))

Might be unstable for stiff equations. "Stiff problems" are the kind of equations where we expect fast exponential decay onto some particular solution.

Backward Euler

Implicit: G(Y(t + dt), Y(t)) = 0
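
A throwaway sketch of the energy-drift point above, comparing explicit Euler with the semi-implicit (symplectic) variant on the harmonic oscillator x'' = -x:

```python
# Harmonic oscillator x'' = -x, energy E = (x^2 + v^2) / 2.
# Forward (explicit) Euler systematically pumps energy in;
# semi-implicit Euler keeps it bounded.

def forward_euler(x, v, dt, steps):
    for _ in range(steps):
        x, v = x + dt * v, v - dt * x
    return 0.5 * (x * x + v * v)

def semi_implicit_euler(x, v, dt, steps):
    for _ in range(steps):
        v = v - dt * x   # update v first...
        x = x + dt * v   # ...then use the NEW v to update x
    return 0.5 * (x * x + v * v)

E0 = 0.5  # initial energy for x=1, v=0
print(forward_euler(1.0, 0.0, 0.01, 10000))        # noticeably > 0.5: energy drift
print(semi_implicit_euler(1.0, 0.0, 0.01, 10000))  # stays close to 0.5
```

For forward Euler one can check directly that each step multiplies the energy by exactly (1 + dt²), which is the drift described above.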

[2018-12-03] Is a functional derivative a generalized function? - Mathematics Stack Exchange

https://math.stackexchange.com/questions/908499/is-a-functional-derivative-a-generalized-function

[2018-12-09] so, generally the functional derivative is a distribution: <dF[f]/df, v> = lim_{ε→0} (F(f + ε v) − F(f)) / ε. However, if the functional is 'good' enough, its derivative can be represented as an ordinary function (via an integral).

hmmm, Cauchy-Riemann look just like Euler-Lagrange! wonder if there is some direct analogy? [[math]]

Not really.. the derivatives are pretty different

Theorem Proofs: These are the same as computer games. This is the mathematician’s favourite part. The natural number game is what happens if you take all the definitions and theorem statements out of the hands of the user, and just ask them to fill in the proofs, and give them enough of an API to make it fun. Bhavik Mehta made an impressive combinatorics repo and he told me that he never once had to think about how finite sets were actually implemented in Lean — he could just use the interface.

todo
from ip Lean is better for proper maths than all the other theorem provers

null hypothesis [[statistics]]

p-value [[statistics]]

[2016-10-04] random variables [[probability]]

A new random variable Y can be defined by applying a real Borel measurable function g: ℝ → ℝ to the outcomes of a real-valued random variable X. The cumulative distribution function of Y is

F_Y(y) = P(g(X) ≤ y).

If the function g is invertible (i.e. g⁻¹ exists) and is either increasing or decreasing, then the previous relation can be extended to obtain

F_Y(y) = P(X ≤ g⁻¹(y)) = F_X(g⁻¹(y)), if g⁻¹ increasing,
F_Y(y) = P(X ≥ g⁻¹(y)) = 1 − F_X(g⁻¹(y)), if g⁻¹ decreasing.
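
A quick Monte Carlo check of the increasing case (my own example, with g(x) = eˣ and X standard normal, so F_Y(y) = Φ(ln y)):

```python
import math, random

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(200000)]
ys = [math.exp(x) for x in xs]          # Y = g(X) with g increasing

y0 = 2.0
empirical = sum(1 for y in ys if y <= y0) / len(ys)      # P(Y <= y0) from samples
phi = 0.5 * (1 + math.erf(math.log(y0) / math.sqrt(2)))  # F_X(g^{-1}(y0)) = Phi(ln y0)
print(abs(empirical - phi) < 0.01)  # True: the two CDFs agree
```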

[2014-10-22] eigenvalues [[linalg]]

Algebraic multiplicity: multiplicity as a root of the characteristic polynomial.
In case of the field of complex numbers, the sum of algebraic mult. is exactly n.

Geometric multiplicity: dimension of the eigenspace associated with the eigenvalue.

Geometric multiplicity of λ is equal to dim ker(A - λ I)

Jordan cell corresponds to each subspace
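
A small sympy illustration (my own example): a 2×2 shear has eigenvalue 1 with algebraic multiplicity 2 but geometric multiplicity 1, i.e. a single Jordan cell, so it is not diagonalizable.

```python
import sympy as sp

# Shear matrix: one eigenvalue, one Jordan cell
A = sp.Matrix([[1, 1],
               [0, 1]])

lam = sp.symbols('lam')
charpoly = (A - lam * sp.eye(2)).det()
print(sp.factor(charpoly))     # (lam - 1)**2 -> algebraic multiplicity 2

eigenspace_dim = 2 - (A - sp.eye(2)).rank()  # dim ker(A - I)
print(eigenspace_dim)          # 1 -> geometric multiplicity 1
```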

[2014-09-27] some functional analysis notes [[funcan]]

L^∞ space
A sequence of bump functions of height 1:

f_n(x) = 1 if x ∈ [n, n + 1], 0 otherwise

Does not have a convergent subsequence!

Riesz-Fischer theorem: L^p is a Banach space.
Dominated convergence theorem.

[2019-11-26] Interactive Linear Algebra | Hacker News

https://news.ycombinator.com/item?id=21628449

[2019-11-21] Differential Equation Solution Strategies | Intuitive Explanations [[diffeq]]

https://intuitiveexplanations.com/math/differential-equation-solution-strategies

[2019-11-26] Immersive Math

http://immersivemath.com/ila/index.html

[2020-04-02] Is there something like this for math? Does anyone know? | Hacker News [[math]]

https://news.ycombinator.com/item?id=16373386
like physics travel guide, but for math

[2019-09-01] Bourbaki dangerous bend symbol - Wikipedia

https://en.wikipedia.org/wiki/Bourbaki_dangerous_bend_symbol

The dangerous bend or caution symbol ☡ (U+2621 ☡ CAUTION SIGN) was created by the Nicolas Bourbaki group of mathematicians and appears in the margins of mathematics books written by the group. It resembles a road sign that indicates a "dangerous bend" in the road ahead, and is used to mark passages tricky on a first reading or with an especially difficult argument.[2]

[2019-05-04] List of computer algebra systems - Wikipedia https://en.wikipedia.org/wiki/List_of_computer_algebra_systems

interesting, there are not that many different computer algebra systems. Also looks like sagemath is updated more often than sympy?

[2019-02-26] LMS Popular Lecture Series 2017, 'The Unreasonable Effectiveness of Physics in Maths', David Tong

https://www.youtube.com/watch?v=UVuKyZ4pBzg&list=WL&index=71&t=0s
18:42 if you're a mathematician, you're not gonna escape, the universe will find applications

[2019-11-23] Supertasks - YouTube

https://www.youtube.com/watch?v=ffUnNaQTfZE
good video on hypercomputation and infinite series

[2019-12-26] The Future of Mathematics? - YouTube [[types]] [[math]]

https://www.youtube.com/watch?v=Dp-mQ3HxgDE&list=WL&index=48&t=1s
great talk (advocates Lean prover)

[2019-02-15] linear algebra - Intuitively, what is the difference between Eigendecomposition and Singular Value Decomposition? - Mathematics Stack Exchange

https://math.stackexchange.com/questions/320220/intuitively-what-is-the-difference-between-eigendecomposition-and-singular-valu/320232#320232
difference between svd and eigendecomposition

[2018-07-21] Bayesian and frequentist reasoning in plain English - Cross Validated [[bayes]]

https://stats.stackexchange.com/questions/22/bayesian-and-frequentist-reasoning-in-plain-english#comment3052_56

https://jakevdp.github.io/blog/2014/06/12/frequentism-and-bayesianism-3-confidence-credibility/ [[bayes]]

http://www.behind-the-enemy-lines.com/2008/01/are-you-bayesian-or-frequentist-or.html [[bayes]]

some practical example with beta distribution for bayes vs freq
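
A minimal version of such an example (a sketch of my own, not the one from the linked post): with a Beta prior, the Bayesian posterior for a coin's bias is conjugate and easy to compare with the frequentist point estimate.

```python
# Coin-flip sketch: with a Beta(a, b) prior on p and k heads in n flips,
# the posterior is Beta(a + k, b + n - k); the frequentist estimate is k/n.
a, b = 1, 1          # uniform prior
k, n = 7, 10         # observed 7 heads in 10 flips

post_a, post_b = a + k, b + n - k
posterior_mean = post_a / (post_a + post_b)   # Bayesian estimate: 8/12
mle = k / n                                   # frequentist estimate: 0.7

print(round(posterior_mean, 3))  # 0.667: pulled toward the prior mean 0.5
print(mle)                       # 0.7
```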

https://www.youtube.com/watch?v=KhAUfqhLakw [[bayes]]

good talk, nice explanation of bayes vs frequentist
takeaway: bayes is more natural for communicating scientific results to the public: the model parameter is 95% likely to be within the specific credible interval

13:00

https://en.wikipedia.org/wiki/Nuisance_parameter
nuisance parameter – integrated over in the Bayesian method? https://en.wikipedia.org/wiki/Marginal_likelihood#Bayesian_model_comparison

15:29 conditioning vs marginalization

frequentist is taking a slice, Bayesian is integrating over (see the pictures)

17:00 confidence vs credibility

freq: If this experiment is repeated many times, in 95% of cases the computed confidence interval will contain the true parameter. Confidence interval varying, parameter fixed.
bayes: Given our observed data, there is a 95% probability that the value of the parameter lies within the credible region. Credible region fixed, parameter varying.

some interesting point about what people actually mean when debating frequentist vs bayes [[bayes]]

https://www.lesswrong.com/posts/mQfNymou9q5riEKrf/frequentist-vs-bayesian-breakdown-interpretation-vs

http://math.ucr.edu/home/baez/surprises.html surprises in math [[baez]]

[2019-11-11] David Chapman on Twitter: "Excellent explanation of the emotional fallout from the crisis of the foundations of mathematics https://t.co/wxCBgLDizA" / Twitter

[2018-04-02] [[math]] [[pde]]

Parabolic PDEs (e.g. heat) smooth out singularities. Hyperbolic PDEs (e.g. wave) displace singularities. 

[2018-12-25] some proofs are better than others when you are working with proof assistants. In a sense this principle also applies here: some ways of deriving are easier for sympy to handle [[math]]

[2020-12-23] Aleph 0 - YouTube [[math]]

excellent videos

Try spaced repetition for basic topology [[topology]] [[spacedrep]]

[2019-06-20] existence of solutions for IVP dy/dx = f(y, x); y(x0) = y0 [[drill]] [[diffeq]]

if f(y, x) and df/dy(y, x) are continuous around y0, x0, there exists a local solution
Note that the theorem only guarantees the existence of a solution near the initial values, and one cannot expect the solution to be defined for all x.
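
A sympy illustration of why the solution is only local (a standard textbook example, not from the quoted source): y' = y² with y(0) = 1 blows up at x = 1 even though f(y, x) = y² is smooth everywhere.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# y' = y^2, y(0) = 1  ->  y = 1/(1 - x), defined only for x < 1
sol = sp.dsolve(sp.Eq(y(x).diff(x), y(x)**2), y(x), ics={y(0): 1})

print(sp.simplify(sol.rhs - 1 / (1 - x)) == 0)  # True: solution is y = 1/(1 - x)
print(sp.limit(sol.rhs, x, 1, dir='-'))         # oo: blow-up at x = 1
```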

[2019-12-20] verbit.ru/Job/HSE/Curriculum/all.txt http://verbit.ru/Job/HSE/Curriculum/all.txt

[2018-10-30] Topoi: The Categorial Analysis of Logic (Dover Books on Mathematics): Amazon.co.uk: Robert Goldblatt: 0800759450268: Books

https://www.amazon.co.uk/Topoi-Categorial-Analysis-Logic-Mathematics/dp/0486450260

[2019-05-10] What is Applied Category Theory? | Azimuth https://johncarlosbaez.wordpress.com/2018/09/18/what-is-applied-category-theory/

[2019-05-10] Об Байеса - 2 - Не кинокритик. Не палеонтолог. https://plakhov.livejournal.com/227597.html

[2019-05-04] Sage Manifolds - Wikipedia https://en.wikipedia.org/wiki/Sage_Manifolds

[2019-02-22] The Calculus of Variations | Bounded Rationality https://bjlkeng.github.io/posts/the-calculus-of-variations/

some good explanations for calculus of variations

[2019-11-03] I hate the Pumping Lemma | Bosker Blog https://bosker.wordpress.com/2013/08/18/i-hate-the-pumping-lemma/

[2019-08-27] The Existential Risk of Math Errors - Gwern.net https://www.gwern.net/The-Existential-Risk-of-Mathematical-Error#sn25

Reading great mathematicians like Terence Tao discuss the heuristics they use on unsolved problems, they bear some resemblances to computer science techniques.

[2019-12-26] The Future of Mathematics? [video] | Hacker News https://news.ycombinator.com/item?id=21200721

Lean: https://leanprover.github.io/
Repo: https://github.com/leanprover/lean/
Chat: https://leanprover.zulipchat.com/
The maths course (in French) that can be seen during the presentation: https://www.math.u-psud.fr/~pmassot/enseignement/math114/

[2019-12-26] Theorem Proving in Lean | Hacker News https://news.ycombinator.com/item?id=17171101

Lean deserves wide recognition. First, it's fast enough for highly interactive theorem development. Second it can also be used as a programming language. And lastly its syntax is pleasant to work with which is important to the experience.
If you have only heard about interactive theorem provers and don't yet have any opinions I'd give Lean a try first. The interactive tutorials are nice and the aforementioned features make it pleasant to work with.

[2019-12-26] Theorem Proving in Lean | Hacker News https://news.ycombinator.com/item?id=17171101

bandali on May 28, 2018 [-]
An Introduction to Lean [0] is another nice (albeit incomplete) tutorial.
There’s a fairly active community over on Zulip [1] if you like to drop by for a chat or get some help.
[0]: https://leanprover.github.io/introduction_to_lean/
[1]: https://leanprover.zulipchat.com

[2019-12-26] A Review of the Lean Theorem Prover | Jigger Wit https://jiggerwit.wordpress.com/2018/09/18/a-review-of-the-lean-theorem-prover/

[2019-12-08] 11.3 - Identifying Outliers (Unusual y Values) | STAT 501 https://newonlinecourses.science.psu.edu/stat501/lesson/11/11.3

An observation with an internally studentized residual that is larger than 3 (in absolute value) is generally deemed an outlier.
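
A hand-rolled numpy sketch of that rule (not the STAT 501 code; toy data of my own): compute internally studentized residuals directly and flag |r| > 3.

```python
import numpy as np

# Internally studentized residual: r_i = e_i / (s * sqrt(1 - h_ii)), where e are
# the OLS residuals, s^2 = SSE / (n - p), and h_ii are the leverages.
x = np.arange(1.0, 21.0)
y = 2.0 * x
y[10] += 15.0                                # plant one gross outlier

X = np.column_stack([np.ones_like(x), x])    # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)  # leverages (hat matrix diagonal)
n, p = X.shape
s = np.sqrt(resid @ resid / (n - p))

r = resid / (s * np.sqrt(1 - h))
print(np.where(np.abs(r) > 3)[0])  # [10]: only the planted outlier is flagged
```

Note that |r| is bounded by sqrt(n - p), so the "|r| > 3" rule only has teeth with enough observations.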

[2020-05-12] A 2020 Vision of Linear Algebra | Hacker News

[2020-07-31] Hyperbolica on Steam [[math]] [[games]]

frequentist vs bayes probability

https://stats.stackexchange.com/questions/31867/bayesian-vs-frequentist-interpretations-of-probability

The snag is that we have to introduce the prior distribution into our analysis - this reflects our belief about the value of p before seeing the actual values of the Xi. The role of the prior is often criticised in the frequentist approach, as it is argued that it introduces subjectivity into the otherwise austere and objective world of probability.
It might also be good to mention that the gap between the frequentist and Bayesian approaches is not nearly as great on a practical level: any frequentist method that produces useful and self-consistent results can generally be given a Bayesian interpretation, and vice versa. In particular, recasting a frequentist calculation in Bayesian terms typically yields a rule for calculating the posterior given some specific prior. One can then ask "Well, is that prior actually a reasonable one to assume?"
You're right about your interpretation of Frequentist probability: randomness in this setup is merely due to incomplete sampling. From the Bayesian viewpoint probabilities are "subjective", in that they reflect an agent's uncertainty about the world. It's not quite right to say that the parameters of the distributions "change". Since we don't have complete information about the parameters, our uncertainty about them changes as we gather more information.
A Bayesian may say that the probability that there was life on Mars a billion years ago is 1/2.
A frequentist will refuse to assign a probability to that proposition. It is not something that could be said to be true in half of all cases, so one cannot assign probability 1/2.
Frequentists posit that the probability of an event is its relative frequency over time,[1] i.e., its relative frequency of occurrence after repeating a process a large number of times under similar conditions. This is also known as aleatory probability. The events are assumed to be governed by some random physical phenomena, which are either phenomena that are predictable, in principle, with sufficient information (see determinism); or phenomena which are essentially unpredictable. Examples of the first kind include tossing dice or spinning a roulette wheel; an example of the second kind is radioactive decay. In the case of tossing a fair coin, frequentists say that the probability of getting a heads is 1/2, not because there are two equally likely outcomes but because repeated series of large numbers of trials demonstrate that the empirical frequency converges to the limit 1/2 as the number of trials goes to infinity.
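
A throwaway simulation of that frequentist reading (my own): the empirical frequency of heads converges to 1/2 as the number of trials grows.

```python
import random

random.seed(42)

def empirical_frequency(n):
    # relative frequency of heads in n simulated fair-coin tosses
    return sum(random.random() < 0.5 for _ in range(n)) / n

for n in (100, 10000, 1000000):
    print(n, empirical_frequency(n))
```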

https://www.reddit.com/r/statistics/comments/ywrba/eli5_bayesian_statistics/

Bayesian statistics rely more on computational simulations and have become more common because computers have become much faster. A lot of people are put off by the fact that you incorporate prior beliefs into your estimate of the "truth", but you can use non-informative (vague) priors like in the example above (each possible outcome gets equal probability), and in any case with a lot of data your priors will become less important.