Re: Like to contribute

I haven't really gotten involved with existing projects, so I probably
can't do a whole lot for you beyond telling you what kind of thing I'd like
to see.  Since I don't know if you'll get a better answer here, that's
what I'm going to do.  *smile*

Speakup currently depends either on a hardware synthesizer or a software
one (usually flite) initialized after booting the system.  One of the
speakup projects is tuxtalk, which is based on rsynth.  The speech quality
is pretty terrible, and I don't know if it's actually in a usable state
right now.  I'd like to see this improved at least to the point it can be
used long enough to fix a broken system to the point that a better
software synth can be loaded.  I think it'd also be pretty cool to see
tuxtalk and a very simple line reader added to GNU GRUB so that a person
can actually manage their system's startup without vision.


Another kind of cool project out there is Speech Dispatcher, which aims to
be a better userspace speech server than the typical emacspeak server.
(Emacspeak servers must either be written in TCL or implement a TCL script
parser.  They lack index-reporting (as far as I can tell), and there is no
documentation outside of emacspeak's source code to tell you what commands
do what.  Additionally, because there's no documented and versioned API,
you run into the problem where speech servers are distributed separately,
and the ones for less common synths often don't actually work anymore.)
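
To give a feel for what one of those servers actually has to do, here is a
rough sketch in C of the kind of command stream they consume on stdin.  The
command names (q, d, s, tts_say) and the brace-delimited arguments are what I
remember from reading the TCL servers and the eflite source, so treat them as
illustrative rather than as a documented protocol:

/*
 * Toy emacspeak-style command reader.  Command names here come from my
 * reading of the TCL servers and eflite, not from any specification.
 */
#include <stdio.h>
#include <string.h>

static void synth_queue(const char *text) { printf("[queue] %s\n", text); }
static void synth_flush(void)             { printf("[speak queued text]\n"); }
static void synth_stop(void)              { printf("[stop speech]\n"); }

int main(void)
{
    char line[1024];

    while (fgets(line, sizeof line, stdin)) {
        line[strcspn(line, "\n")] = '\0';

        if (strncmp(line, "q {", 3) == 0) {
            /* q {text} -- queue text for later dispatch */
            char *text = line + 3;
            char *end = strrchr(text, '}');
            if (end)
                *end = '\0';
            synth_queue(text);
        } else if (strcmp(line, "d") == 0) {
            /* d -- dispatch (speak) everything queued so far */
            synth_flush();
        } else if (strcmp(line, "s") == 0) {
            /* s -- stop speech immediately */
            synth_stop();
        } else if (strncmp(line, "tts_say {", 9) == 0) {
            /* tts_say {text} -- speak right away, bypassing the queue */
            char *text = line + 9;
            char *end = strrchr(text, '}');
            if (end)
                *end = '\0';
            synth_queue(text);
            synth_flush();
        }
        /* Real servers also handle letters, tones, rate and punctuation
           settings, and more -- the part you only learn by reading the
           emacspeak sources. */
    }
    return 0;
}

A real server also has to cope with TCL quoting and with text split across
commands, which is where writing one without TCL starts to hurt.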

The main limitation of Speech Dispatcher at the moment is that it pretty
much uses Festival (not flite, so you'll get better speech but use more
resources to get it).  It's modular enough that more synth backends can be
written, but right now they pretty much tell you to use Festival.  It'd be
good to see ViaVoice/TTSynth/Eloquence support as another software option,
and the same goes for the Linux DECtalk runtime.  Lifting some support from Speakup
for other synths (or even implementing something so that you can use the
Speakup drivers as a backend from Speech Dispatcher) would be nice.
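
For comparison, the client side of Speech Dispatcher already feels sane.
Below is a rough sketch of a conversation over SSIP; I'm assuming the
default TCP port of 6560 and quoting the SET/SPEAK/QUIT commands and the
"." terminator from memory of the SSIP documentation, so double-check the
details before leaning on them:

/*
 * Minimal SSIP client sketch.  Port number and command names are my
 * recollection of the Speech Dispatcher documentation, not gospel.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

static void send_line(int fd, const char *s)
{
    write(fd, s, strlen(s));
    write(fd, "\r\n", 2);
}

int main(void)
{
    struct addrinfo hints = { 0 }, *res;
    char buf[512];
    int fd, n;

    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo("localhost", "6560", &hints, &res) != 0)
        return 1;
    fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0)
        return 1;

    send_line(fd, "SET SELF CLIENT_NAME \"me:demo:main\"");
    send_line(fd, "SPEAK");           /* server should answer it's ready for data */
    send_line(fd, "Hello from SSIP");
    send_line(fd, ".");               /* end of message data */
    send_line(fd, "QUIT");

    /* Dump whatever status lines the server sent back. */
    while ((n = read(fd, buf, sizeof buf - 1)) > 0) {
        buf[n] = '\0';
        fputs(buf, stdout);
    }
    close(fd);
    freeaddrinfo(res);
    return 0;
}

A real client would read and check the numeric status line after each
command instead of firing them all off blind, but the shape of the exchange
is the point here.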


The last one is a personal nit I may address myself one of these days.  I
just can't seem to get the hang of yasr.  I mean, I can tell I've got it
working for some definition of working, but what it reads makes no sense
and what it doesn't read makes less.  Most people use speakup nowadays,
but speakup is pretty much tied to Linux.  yasr theoretically works on
pretty much any UNIX system (I've compiled it on a Mac with a little bit
of work..)  I'd like to see yasr behave the way speakup does, or as close
to it as possible.


In order to explain how these things came to be important to me, I probably
have to explain what I've been working with a bit.  I discovered how
pathological emacspeak speech servers were when I tried to make yasr do
something useful on my Linux box and on my Mac, which has a decent enough
software synth available almost immediately, even in single-user mode.
Because yasr wanted an emacspeak server and other things seemed to support
it as an option, I tried to write one based on the eflite source and on
reading one of the TCL servers for a synth whose command codes I knew.

Essentially, I never got the hang of yasr to the point I could actually
use it as the only access mechanism, and that was enough to make the
effort of trying to make another emacspeak server no longer worth it.
Emacspeak's speech server API is unversioned, has changed over time so
that the TCL server I was using (Accent) no longer works correctly, is
basically undocumented outside of the emacspeak source code itself, and
requires that you either write in TCL or write a TCL command processor.
I can't say that eflite's implementation of the latter is robust because I
gave up on it before I actually understood the code.  *grin*

What I am working on that actually got me started down that road is
essentially a software suite similar to that found on most notetakers,
designed with an audio interface.  Emacspeak is cool and all, but it isn't
something a person can just pick up, especially if you're trying to learn
both it and emacs at the same time.  Anyway, the reason for the work on
the Mac is that my laptop is a Powerbook and I do as much development
there as on a Linux desktop.

I'd pretty much decided emacspeak speech servers weren't the way to go, so
that left me looking for alternatives.  Speakup has lots of drivers, but
they're pretty much tied to the Linux kernel.  Speech Dispatcher seems
like the ideal thing to standardize on, but it pretty much requires
Festival at the moment, and Festival can only output to disk on a Mac, and
I don't really like Festival's audio much anyway.  *smile*

So I am doing what everyone else has done thus far for the time being: I
wrote a simple speech API for my own code, and I've written backends for
both Macintalk and a serial DECtalk (which is new--I no longer have a
working Accent unfortunately.)  It works well, and I plan to add support
for Speech Dispatcher later on.  At that point I may port my existing
backends to Speech Dispatcher and migrate to using that exclusively.
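
This isn't my actual API, and all of the names below are made up, but the
shape of the thing is roughly this: the application talks to one small
interface, and each synthesizer (Macintalk, the serial DECtalk, eventually
a Speech Dispatcher client) fills in the functions behind it:

/*
 * Hypothetical speech backend interface -- a sketch of the shape, not my
 * real code.  Each synth supplies its own set of these functions.
 */
#include <stdio.h>

struct speech_backend {
    const char *name;
    int  (*open)(void);            /* bring the synth up */
    void (*say)(const char *text); /* queue and speak text */
    void (*stop)(void);            /* shut up immediately */
    void (*close)(void);           /* release the device */
};

/* A stub standing in for a real serial DECtalk driver. */
static int  dectalk_open(void)            { puts("[open serial port]"); return 0; }
static void dectalk_say(const char *text) { printf("[dectalk] %s\n", text); }
static void dectalk_stop(void)            { puts("[dectalk] *silence*"); }
static void dectalk_close(void)           { puts("[close serial port]"); }

static struct speech_backend dectalk = {
    "dectalk-serial", dectalk_open, dectalk_say, dectalk_stop, dectalk_close
};

int main(void)
{
    struct speech_backend *synth = &dectalk; /* picked at run time in reality */

    if (synth->open() != 0)
        return 1;
    synth->say("Testing the backend interface.");
    synth->stop();
    synth->close();
    return 0;
}

Adding a Speech Dispatcher backend later then just means writing another
set of those functions.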

What's really funny about all of this is that in the end, the primary
thing that'll do the talking for my software is the soft synth runtime for
either the DECtalk or ViaVoice/TTSynth/Eloquence/whatever, whichever I can
license more affordably for small quantity (under a thousand) distribution
for Linux on the XScale architecture.  I know ViaVoice runs on XScale and
is available for x86 for a low per-unit cost, but I don't currently know
that the people providing the latter support the former.  OTOH I know that
I can get DECtalk software for Linux XScale, but have no idea what it's
going to cost me to do it.  Everything I've done related to getting speech
out of the systems I am currently using is just for development and
because the software would sure be convenient on my laptop if I'm going
to the trouble of writing it anyway.  *smile*


I'm sure you were hoping to hear that things are a little more coordinated
with more cooperative efforts than what I've described.  That's why I am
really pretty excited about things like Speech Dispatcher.  It needs many
more backends before it gets adopted, but if that happens, it could go a
long way toward unifying how we make things talk in the free software and
academic communities.  Of course, lots of drivers doesn't guarantee that
wide adoption by other projects will automatically follow.

On Tue, Feb 14, 2006 at 11:57:42PM +0530, Deepak Thomas wrote:
> I am a project trainee at IBM and would like to contribute to the Blinux
> project, but I am not sure how. I would love to develop something for my
> project at IBM. Could someone please point me in the right direction.

-- 
"We are what we repeatedly do.  Excellence, therefore, is not an act,
but a habit."
	-- Aristotle

