Omega Tuesday, Mar 30 2010 

Huh.

So my friend told me about this mathematician/computer scientist at IBM named Gregory Chaitin. He wrote a book called “Meta Math!” which talks about some computational stuff. It’s a pretty easy read and gets a sense of what he’s working on across fairly accessibly; his field is now called algorithmic information theory. I read a draft of the book off of arXiv in one sitting and had some ideas in the process.

I will now assume you know a bit about AIT from reading this book or whatever.

So first, a stupid little thought: if stuff is digital, then you are partly some Diophantine equation that processes food. By metaphorical reasoning a la Chaitin, food -> you -> poop, and if we assume there is a string representation of all the info contained in food and shit over a finite alphabet of digits, you would probably be some sort of food-processing program operating on those food bits, spitting out poop bits. By some extraordinary abstract bullshit, you would also be an equivalent Diophantine equation. Corollary: kernel(you) = Taco Bell. [EDIT: I’m probably wrong in this paragraph, because I think the book talks about Diophantine equations being equivalent to some sort of finite axiomatic system.]

Yay nomber thury.

Second, I had this idea: equivalence up to human experience. I’m not sure how this idea would be formalized, or even if it could be formalized, but I think there is something in it worth exploring. The observational tools at our disposal today are much more vast than the tools available centuries ago, so in a sense, the human experience is widening to include the senses we build for ourselves (like the Hubble telescope and the LHC). Chaitin addresses the ontology of the universe and takes the stance that the universe, at its core, is in fact discrete. Provocative.

But he’s also a computer scientist. Computers can be pretty convincing at feigning reality sometimes. Man, computers are really good at it, actually.

Carl Sagan says we’re made of star stuff! I’m guessing Chaitin believes we’re made of digital stuff. Depending on your philosophy on consciousness, that could mean our consciousness has a finite digital representation, if you believe in the mundane. Still, I have a feeling that real life might be made up of a quantum alphabet, and that might mean some sort of quantum analog of omega exists, where omega is representable by a sequence (or, if quantum programs are not denumerable, by a net or filter or something) of vectors in \mathbb{C}^2.

Last thought: why do so many mathematicians enjoy hiking? Like, hardcore hiking. I don’t get it. Is it a rite of passage? Will I have to fight off a bear for a PhD in math? Will it be a grizzly bear? Or Grizzly Bear, even?

Maybe, just maybe, mathematicians are bears at heart. WHY ARE WE SO AFRAID OF OUR BEAR NATURE? EMBRACE THE PRIMAL. EAT RED MEAT. POOP MORE. WE ARE DIGITAL PEOPLE.

P.S. I’m working on the DSP, I have a chiptune tracker on my GBA, and I’m doing a recording gig for cash. Woot.


Note. Thursday, Mar 25 2010 

Actually, I have a confession to make: bootsie inspired me to program my own DSPs.

Still, that phase alignment tool looks pretty tasty. I might have to race you bootsie… but after I finish my lofi dsp.

edit: oh yeah got it to compile and it test-ran smoothly on my DAW. Need to code some more of it now.

More parameters for a lofi dsp Tuesday, Mar 23 2010 

Totally random, but when I finish this analog lofi matrix dsp, it will be called ‘lofnck.’

So some more mathematics, some more fun parameters, and some fun shiiit.

Another way that you can crunchy down a signal is by lowering its sampling rate. That sort of thing acts like a wild form of a low-pass filter, or, as they say in the hood, a pretty gangsta’ LP filter.

Basically, we want to dirty down the signal by downsampling it. There’s a natural tendency for sound to move down to lower tones when it’s downsampled enough (that is, its Nyquist frequency goes down and square-wave-like distortion is introduced).

Recall that I modeled discrete signals with a continuous map f:\mathbb{Z} \rightarrow [-1, 1]. Let’s simplify notation a bit and just look at pairs (x, f(x)) \in \mathbb{Z} \times [-1,1]. I believe that the set of these functions, or the function space, is a Banach space (but thankfully, this consideration is far off as I’m not making a synth here, but a stable audio processing unit). Also, I believe there should be a smooth lifting of these functions into \mathbb{R} \rightarrow [-1,1] (which I will use to model analog signals).

The function space is already a Lebesgue space under the counting measure \mu(X) = |X|. We can define a Lebesgue integral over it (or we can even go ahead and use the stronger example of sequence spaces, since the difference between \mathbb{N} and \mathbb{Z} in this application is not really important).
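To be explicit about why this is computable at all: under the counting measure, the Lebesgue integral over a finite set A is literally just a finite sum,

\int_A f \, d\mu = \sum_{n \in A} f(n),

which is the form everything below takes once it’s actually coded.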

An example of a discrete linear approximation of the analog sound I want would be to simply take our DSP to be a map

(x, f(x)) \mapsto (x, \frac{1}{|A|}\sum_{a \in A} f(a)), where A is the element containing x of some nice partition of \mathbb{Z}.
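For concreteness, here’s roughly what that naive map looks like in C++ (a minimal sketch with made-up names, operating on a whole buffer at once rather than the blocks a host would hand you; blockSize plays the role of |A|):

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Replace each run of blockSize samples with its mean value:
    // every x in A gets (1/|A|) * sum over A of f.
    std::vector<float> blockAverage(const std::vector<float>& in, std::size_t blockSize)
    {
        if (blockSize == 0) return in;               // degenerate partition, bail out
        std::vector<float> out(in.size());
        for (std::size_t start = 0; start < in.size(); start += blockSize) {
            const std::size_t end = std::min(start + blockSize, in.size());
            float sum = 0.0f;
            for (std::size_t i = start; i < end; ++i)
                sum += in[i];
            const float mean = sum / static_cast<float>(end - start);
            for (std::size_t i = start; i < end; ++i)
                out[i] = mean;
        }
        return out;
    }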

But the issue with this map is that it’s naive and probably loses a lot of audio quality (God is in the details, as the old folks would say). There are a couple of details about its use as well:

1) In general, you can’t say whether the digital signal is convergent (summable), so you can’t guarantee that things will work out, because input ‘niceness’ is something you can’t control. On the other hand, most audio signals people will use with this utility will be at the very least reasonable in that sense, being of finite time length and whatnot. So uh, it will work out, basically.

2) It will cost more to do it the nicer way. But CPUs are pretty powerful nowadays, and having quality shit is good even if it runs your CPU a bit hot.

3) Reasonably well mixed audio signals intended for music are going to be pretty super-nice anyways.

4) The parameter is discrete. Let’s make it more analog-y.

So we may decide to anti-alias the downsampling and make it go all smoother and gooier, all analog-like and shit. The signal we have corresponds to a simple function in the standard Lebesgue space on \mathbb{R}. Some smooth function approximates this simple function well (simple functions and smooth functions are each dense in the Lebesgue space, so we can hop between the two). We want to find such a smooth function which can be computed using as little CPU as possible.

One thing we may do with such a lifting is to give nicely anti-aliased sound given a real-number parameter for the width of the \mathbb{R}-partition (in application, it would be a float or a double). So first we operate on the smooth lifting in this way:

(f:\mathbb{R} \rightarrow [-1, 1]) \mapsto \left(x \mapsto \frac{1}{\mu(A)} \int_A f \, d\mu\right), where x \in A, a \phi-partition element of \mathbb{R}, and \phi is our partition length (which thus induces a downsampling of the analog signal model).

We also want modulated partition lengths to preserve audio stability. As I recall from an analysis class, subsets of \mathbb{R} of the form [a, b) are an important subcollection of Lebesgue-measurable sets in (\mathbb{R}, M, \mu). We may also partition \mathbb{R} with sets of this form and have a map from \mathbb{Z} into such a partition given by \chi: n \mapsto [a, b), \quad a \leq n < b.

So our audio processor is straight up like this: for f \in [-1, 1]^\mathbb{Z} with smooth lifting \tilde{f}, \quad (x, f(x)) \mapsto (x, \frac{1}{\mu(A)} \int_A \tilde{f} \, d\mu), where A = \chi(x).

We get anti-aliasing and stability for free from the way sets of the form [a, b) tile \mathbb{R} into a disjoint partition.
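A sketch of how I might code that map (my reading of it, anyway): take a real-valued cell width phi, and approximate the integral over each cell [k\phi, (k+1)\phi) by a plain average of the integer samples landing in it, sidestepping the smooth lifting for now. All names here are made up:

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Partition the real line into cells [k*phi, (k+1)*phi) of real-valued
    // width phi, and send every sample to the mean of the samples in its cell.
    std::vector<float> phiPartitionAverage(const std::vector<float>& in, double phi)
    {
        if (phi <= 0.0) return in;                    // degenerate partition width
        std::vector<float> out(in.size());
        std::size_t start = 0;
        while (start < in.size()) {
            // start lies in cell k = floor(start / phi); the last integer
            // sample of that cell is ceil((k+1)*phi) - 1.
            const double k = std::floor(static_cast<double>(start) / phi);
            std::size_t end = static_cast<std::size_t>(std::ceil((k + 1.0) * phi));
            end = std::min(end, in.size());
            if (end <= start) end = start + 1;        // guard against tiny phi
            float sum = 0.0f;
            for (std::size_t i = start; i < end; ++i)
                sum += in[i];
            const float mean = sum / static_cast<float>(end - start);
            for (std::size_t i = start; i < end; ++i)
                out[i] = mean;
            start = end;
        }
        return out;
    }

Because phi is a double, the downsampling amount can now be modulated continuously, which was the whole point of item 4 above.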

The little bit missing from this picture is the smooth lifting. I’ll need some time to think on that one. There may be better ways of directly approximating this whole process that would take CPU use down several notches.

Postscript: I actually don’t know anything about audio engineering. 😉 I suppose I will have to let the plugin itself speak for the various ideas being thrown about here.

Post-postscript: I got my VST developing environment set up! Experimentation is forthcoming. Watch this blog for free audio plugins, hehe.

edit: I think that what I really want to do is to find a series of polynomials which converge rapidly to the map mentioned above, as polynomials are both analytic and awesome.

edit2: Taylor series should totally work for this shit.

edit3: http://en.wikipedia.org/wiki/Polynomial_interpolation is the way to victory.

edit4: No, no, wait, it’s splines. Blergh.
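edit5: for the record, the classic cheap spline for audio interpolation is the Catmull-Rom cubic through the four neighboring samples. A sketch (these are the standard coefficients, nothing specific to my plugin yet):

    // Catmull-Rom interpolation: t in [0,1) interpolates between y1 and y2,
    // using y0 and y3 as the outer support samples.
    inline float catmullRom(float y0, float y1, float y2, float y3, float t)
    {
        const float a = -0.5f * y0 + 1.5f * y1 - 1.5f * y2 + 0.5f * y3;
        const float b =         y0 - 2.5f * y1 + 2.0f * y2 - 0.5f * y3;
        const float c = -0.5f * y0             + 0.5f * y2;
        const float d =              y1;
        return ((a * t + b) * t + c) * t + d;
    }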

DSP coding time. Monday, Mar 22 2010 

I’m doing this music project, and I wanted to add math to it. I also got really into Pantha du Prince’s “Black Noise” – it’s such a deep musical journey if you listen through it with an open mind; for me, the album felt like my first listens of Boards of Canada’s “Music Has the Right to Children”.

My Fourier analysis is a bit rusty right now. Anybody more in tune with the psychoacoustic physics want to lend a hand and drop some tips on some psychoacoustic maps for the input and output signals? I’m coding a vintage-sounding lofi matrix for chiptune uses (so like signal discretification + coloration).

Thinking here for a model of what I’m trying to do (to be honest, I’m really just making up terms as we go along based on things I remember): a digital signal is discrete, so this matter has to be handled discretely. The discrete signal may be represented mathematically as a continuous function f:\mathbb{Z} \rightarrow [-1, 1], where \mathbb{Z} has the discrete topology and [-1, 1] the usual one. The codomain, when I code this, will be represented by floats, but the math for floats is similar enough to the math for real numbers on a bounded interval that the sound will pretty much be reproduced up to human perception.

Which is to say, it’s hi-fi signals I want to work with. I want my shit to sound like vinyl being fed through NES hardware. Aw yeah.

Of course, I’ll also have to implement output for the discrete case, since the VST host may not do floats, using 24-bit or 16-bit integers instead (which is kinda weird now that we have 32-bit processors, but whatever, audio industry standards lol).
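Something like this, probably, for the 16-bit case (a hedged sketch; the name and scaling convention are mine, and a real host might want dithering on top):

    #include <cmath>
    #include <cstdint>

    // Map the float model [-1, 1] back to a 16-bit integer sample.
    inline std::int16_t toInt16(float x)
    {
        if (x > 1.0f)  x = 1.0f;                     // clamp out-of-range input
        if (x < -1.0f) x = -1.0f;
        return static_cast<std::int16_t>(std::lrintf(x * 32767.0f));
    }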

So let’s think about signal maps. A signal map is a map \phi:[-1, 1]^\mathbb{Z} \rightarrow [-1, 1]^\mathbb{Z}, where [-1, 1]^\mathbb{Z} is the function space for signals as modeled above. We may attempt to fit this function space with some properties, then.
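In C++ terms, the rough picture I have in mind (sketch only; a real VST host hands you fixed-size blocks, not whole signals, and these type names are my own):

    #include <functional>
    #include <vector>

    using Signal    = std::vector<float>;                    // a finite window of f : Z -> [-1, 1]
    using SignalMap = std::function<Signal(const Signal&)>;  // phi : [-1,1]^Z -> [-1,1]^Z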

Afterwards, I guess I should try to find a topology on the thing. Or maybe even a Hilbert space. Actually, it could very well be a Hilbert space given the right choices for the norm, addition and scalar multiplication. Though, that’s stuff that I would do just for the hell of doing it.

What I would need to do is to figure out specific signal maps that would make interesting effects for this project, really. Like for example the lofi matrix + analog coloration dsp I wanted to make would probably include discretification at some point. That would be some sort of discontinuous map f(x) = some chopped bits in the float representation of x.
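One literal reading of “chopped bits” would be masking off the low mantissa bits of the IEEE 754 float; a hypothetical sketch (bitsToChop is my own parameter, 0 to 23):

    #include <cstdint>
    #include <cstring>

    // Zero out the low bitsToChop mantissa bits of an IEEE 754 float.
    inline float chopBits(float x, int bitsToChop)
    {
        if (bitsToChop <= 0) return x;
        if (bitsToChop > 23) bitsToChop = 23;        // a float mantissa has 23 bits
        std::uint32_t u;
        std::memcpy(&u, &x, sizeof u);               // safe type-pun
        u &= ~((1u << bitsToChop) - 1u);             // chop the low mantissa bits
        std::memcpy(&x, &u, sizeof x);
        return x;
    }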

Cool, I have a plan of attack now.

tl;dr: have a lollipop. I’m going to do some research on some shit hell yeah.

edit:

Thinking a bit more on the subject, I think I have an algo for making signals more discrete, given by f(x) = \lfloor \rho x \rfloor / \rho, where \rho is the granularity of the discretification.
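That one is a one-liner in C++ (sketch; rho assumed positive, and for audio you might prefer rounding over flooring to keep things symmetric around zero):

    #include <cmath>

    // f(x) = floor(rho * x) / rho: snap x onto a grid of spacing 1/rho.
    inline float discretify(float x, float rho)
    {
        return std::floor(rho * x) / rho;
    }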

moar edit:

Audio information preservation in the face of discretification is probably closely related to the fact that simple functions are dense in Lebesgue spaces, and the algo I listed above always outputs a simple function (so in a sense, that density suggests that all continuous functions can be approximated this way, which isn’t surprising, but it’s nice and intuitive).

nujabes dead Sunday, Mar 21 2010 

One of my favorite Japanese hip hop artists died in a car crash last month.

Rest in peace.

Collatz conjecture Wednesday, Mar 17 2010 

Alright, I’m going to attack the Collatz conjecture. To do so, I’m looking up papers on Hasse’s algorithm, the Syracuse problem, Kakutani’s problem, Ulam’s problem, and the Hailstone problem.

DISREGARD THAT I GOT DRUNK(?)

Mathematics are a GO! Friday, Mar 12 2010 

Dawg, I am a mathematics undergraduate student looking to get deeply involved in – guess – math. So this blog will be about my experiences as a mathematician.

Also, WordPress can do \LaTeX and that makes me so excited.

I don’t think I’ll be posting much cutting edge stuff here, being not a super-ultra-badass-of-math, but rather things that catch my attention as I learn about more math. Please point out my errors or discuss what is posted, as the purpose of this blog is to provide a platform for my exploration of mathematics (and yours, I should hope).

And to kick us off: today, I discovered nets in Munkres’ text. A net is kinda like a generalization of sequences from metrizable spaces to general topological spaces. So what does that mean?

Well, let’s try generalizing sequences and see how far we get. As a refresher, sequences in some set S are essentially maps \mathbb{N} \rightarrow S. A convergent sequence satisfies this: there exists an x such that for every \epsilon > 0, there exists an N \in \mathbb{N} such that n \geq N implies d(x_n, x) < \epsilon (and of course x is called the limit). That’s great and all, but this definition depends on the existence of the metric, and so metrizability is a requirement for this definition of convergence to be applicable. Let’s attempt to revise this definition in more generality: a sequence converges to x iff for every neighborhood U of x, there exists an N \in \mathbb{N} such that n \geq N implies x_n \in U. Are there any problems with this definition?

In a nutshell, yes, or else nets wouldn’t have been invented. Still, all I did was carry over the way general topology avoids epsilons by using neighborhoods – intuitively, I want this definition to work. It won’t work in general, though.

Oh, and by ‘work,’ I mean that this definition should be powerful enough so that the following conditions are equivalent:

  1. f is continuous
  2. x_n \rightarrow x \Rightarrow f(x_n) \rightarrow f(x)

We can show (1)=>(2) fairly simply for all spaces, but (2)=>(1) does not hold in general. However, it does hold for first-countable spaces: if f is not continuous at some x, there is a neighborhood V of f(x) such that no neighborhood of x maps into V, so we can pick a point x_n from each set of a nested countable neighborhood basis at x with f(x_n) \notin V; then x_n \rightarrow x but f(x_n) does not converge to f(x).

So nets generalize this by widening the index set of the sequence to allow more general (possibly uncountable) index sets, so that the first-countability requirement on the topological space can be removed from (2)=>(1). To define a net, we need a bit of set theory:

A partially ordered set, or a poset, is a set with a relation \leq such that:

  1. a \leq a for every a.
  2. a \leq b and b \leq a only if a = b.
  3. a \leq b and b \leq c only if a \leq c.

Note that the relation need not be defined between every pair of elements in the set to satisfy the definition. A partial order does not impose a strong enough structure for use in describing convergence (consider a partial order defined solely on reflexive pairs – wouldn’t be useful for this purpose at all).  We could go ahead then and use totally ordered sets, but in fact a weaker structure would prevail:

A directed set is a poset with the property that for any two elements a, b, there exists an element c such that a \leq c and b \leq c.

So now, a net in X can be defined as a map from a directed set J to X. It really does look like a net if you care to mentally picture it, actually. And convergence is pretty much defined the same way – in fact, sequences are nets with the natural numbers as the index set and the good ol’ less-than-or-equal-to as the order. Proofs using nets involve an extra complication: generally you will need to prove things for nets in general, and that means stacking a bit of set theory on top of whatever topology you’re doing. On the other hand, you get a nice analog of sequences in general topological spaces.
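Spelled out in the same style as the sequence definition above (my phrasing of the standard definition): a net (x_\alpha)_{\alpha \in J} converges to x iff for every neighborhood U of x, there exists a \beta \in J such that \alpha \geq \beta implies x_\alpha \in U.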

Filters are even more general things, I hear. Maybe I’ll get into that on a different day.