Commit message | Author | Age
Uses a # to mark section titles, and a >> to indicate code sections,
which otherwise had no formatting to distinguish them from the rest of
the text. This helps with the conversion of the puzzles help text to a
format the sgt-puzzles rockbox port can use.
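For illustration (the content here is invented), the convention described looks something like this in the generated help text:

```
# How to play this puzzle

Ordinary paragraphs of help text carry no marker at all.

>> puzzles --load savefile
>> puzzles --generate 1
```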
I ran into this as a knock-on effect from having misspelled a section
name identifier (i.e. I had a \k{nonexistent-name} in my document).
Before I fix the document, I should fix the segfault!
I just happened to run across this clearly unfinished paragraph in
build_huffman_tree(), and when I wrote the rest of it, I realised that
there was actually an implicit input constraint which I hadn't
documented, relating the size of the symbol alphabet to the upper
bound on Huffman code length. (Fortunately, Deflate never violates
that constraint, because both of those values are constant in every
Huffman tree it builds.) So I've also added a pair of assertions, one
of which enforces that constraint.
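A sketch of the constraint in question, with illustrative names rather than Halibut's actual identifiers: a Huffman code limited to max_code_len bits can distinguish at most 2^max_code_len symbols, so the alphabet must not be larger than that.

```c
#include <assert.h>

/*
 * Illustrative sketch, not Halibut's real code: enforce that a
 * length-limited Huffman code can actually represent the whole
 * symbol alphabet.  A complete binary tree of depth max_code_len
 * has 2^max_code_len leaves, which bounds the alphabet size.
 */
static void check_huffman_input(unsigned nsyms, unsigned max_code_len)
{
    assert(max_code_len < 32);                /* keep the shift defined */
    assert(nsyms <= (1u << max_code_len));    /* the constraint itself */
}
```

Deflate always satisfies this: its literal/length alphabet has 288 symbols and its distance alphabet 30, both with a 15-bit code length limit, and 2^15 = 32768 comfortably exceeds both.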
This is a function I should have introduced a lot earlier while
writing the CHM output code, because I ended up with quite a lot of
annoying loops to add zero-padding of various sizes by going round and
round on the one-byte rdaddc().
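A minimal sketch of the idea, with a stand-in buffer type: none of these names are claimed to match Halibut's real rdstringc/rdaddc code exactly, but the shape of the convenience function is what the commit describes.

```c
#include <stdlib.h>

/* Hypothetical stand-in for Halibut's growable string buffer. */
typedef struct {
    char *buf;
    size_t len, size;
} rdstringc;

/* Append one byte, growing the buffer as needed (no realloc error
 * handling, since this is only a sketch). */
static void rdaddc(rdstringc *rs, char c)
{
    if (rs->len + 1 >= rs->size) {
        rs->size = rs->size ? rs->size * 2 : 64;
        rs->buf = realloc(rs->buf, rs->size);
    }
    rs->buf[rs->len++] = c;
    rs->buf[rs->len] = '\0';
}

/* The convenience function: append n copies of c in one call, e.g.
 * rdaddc_rep(&rs, '\0', 4) to zero-pad out to a 4-byte boundary,
 * instead of looping on the one-byte rdaddc by hand each time. */
static void rdaddc_rep(rdstringc *rs, char c, size_t n)
{
    while (n--)
        rdaddc(rs, c);
}
```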
Also updates libcharset to the latest revision, which updates _its_
.gitignore (and pulls in other previous fixes too).
I've just set up a script that does code signing by a more sensible
method for cross-compiled Windows builds (i.e. still using the same
underlying technology, but not bothering to fire up a whole Windows
delegation environment that won't get used). So now I can use it.
I don't know how I managed to add a non-TLS URL in this new
code-signing command at around the same time that I'd just switched
over all the ones in my other projects. Must have copied and pasted
from an un-updated checkout, I suppose!
This commit updates the libcharset submodule to incorporate the
autotools-ification that I just pushed to that subproject, and builds
on it by replacing Halibut's own makefile system similarly with an
autotools setup.
The new Makefile.am incorporates both the old Makefile and
doc/Makefile, so a single run of 'make' should now build Halibut
itself and all the formats of its own documentation, which also means
that the automake-generated 'make install' target can do the right
thing in terms of putting an appropriate subset of those documentation
formats in the assorted installation directories.
The old Makefiles are gone, as is release.sh (which is now obsolete
because autotools's 'make dist' doesn't do anything obviously wrong).
The bob build script is comprehensively rewritten, but should still
work - even the clang-based Windows build can use the
autotools-generated makefile system, provided I do the libcharset
build with a manual override of bin_PROGRAMS to prevent it trying to
build the libcharset supporting utilities (which are not completely
Windows-portable).
Thanks to Leah Neukirchen for pointing out that it was left out of
that special-case rule.
Naturally, after calling the previous commit '1.2', someone had to
instantly find a bug that shows up in the simplest possible example
Halibut invocation :-(
I think it's fair to say that I've added substantial new code this
year.
Its purpose was as a convenient way to make the full set of test
output files from inputs/test.but, _even_ the one that needed help
from a Windows (or close enough) build machine to run HHC. But now
that we can generate CHM directly, the latter is no longer necessary,
so this build script no longer does anything you can't do just as
easily by running 'halibut inputs/test.but'.
Now that I'm building a .chm as part of the default doc/Makefile
target, I should ignore it.
Most of the changes since Halibut's last update have been to
libcharset's supporting utility collection, or have added extra API
functions that Halibut doesn't need. But one actually relevant thing
this change brings in is the expanded set of easy-to-type character
set encoding names: for example, you can now say
-Ctext-charset:mac-roman, where you would previously have had to put a
space in the middle of 'Mac Roman' and faff about with quoting on the
shell command line.
I forgot to add the new --chm option to it yesterday, and also I've
just noticed that it still describes --html as XHTML only, which
hasn't been true for years.
The one in wcwidth.c actually came up in one of my valgrind runs: if
you passed it a non-null-terminated wide string (specifically, one
that reaches invalid memory exactly when the length parameter runs
out), it would illegally load the character beyond the end of the
string before noticing that the length parameter said it shouldn't.
The one in bk_man.c may well not be able to come up at all, but I
spotted it in passing and I thought I might as well fix it - it makes
me twitch on general principles to see any use of buf[len-1] without
having checked len>0 first.
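Both patterns can be sketched in a few lines (illustrative code, not the actual Halibut functions). The wcwidth-style bug is an order-of-evaluation problem: testing the character before the length bound reads one element past the end of a non-terminated buffer.

```c
#include <stddef.h>

/* Wrong: evaluates s[i] before checking i < n, so a buffer with no
 * terminator inside the first n elements gets read at s[n]. */
static size_t bounded_len_wrong(const wchar_t *s, size_t n)
{
    size_t i = 0;
    while (s[i] != L'\0' && i < n)
        i++;
    return i;
}

/* Right: length check first, so s[n] is never touched. */
static size_t bounded_len_right(const wchar_t *s, size_t n)
{
    size_t i = 0;
    while (i < n && s[i] != L'\0')
        i++;
    return i;
}

/* And the bk_man.c-style precaution: guard buf[len-1] with len > 0,
 * otherwise len == 0 indexes buf[-1]. */
static int ends_with_newline(const char *buf, size_t len)
{
    return len > 0 && buf[len - 1] == '\n';
}
```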
This was another valgrind-spotted uninitialised variable. And although
you'd think the 'filetype' field is unimportant for fonts that aren't
loaded from a file anyway, it is important, because bk_pdf.c emits
different code for creating font subsets of Type 1 and TrueType fonts
- i.e. it needs to know what type the font has _after_ it's loaded,
not just how to load it.
If you initialise a structure field for the first time with += rather
than =, that won't stop valgrind from saying it's uninitialised, and a
good job too!
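The pattern in miniature (invented struct, purely illustrative): the first write to a fresh field must be a plain assignment, because "+=" reads the indeterminate old value before adding to it, and that read is what valgrind flags.

```c
/* Illustrative only: the first-touch-must-be-assignment rule. */
typedef struct { int count; } stats;

static int demo(void)
{
    stats s;
    /* s.count += 1;     <- would read uninitialised memory here */
    s.count = 1;         /* correct: first touch is plain assignment */
    s.count += 1;        /* fine from this point on */
    return s.count;
}
```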
The 'breaks' and 'aux' fields were filled in rather inconsistently at
various places where a word is created - especially the outlying ones
that manufacture pieces of document during internal processing of
contents, index, bibliography, cross-references etc rather than
directly from the input file. This has never led to any user-visible
behaviour change that I've noticed, but it made a lot of annoying
noise in the valgrind output, which got in my way last week when I was
trying to debug the CHM generation.
Or rather, clang in MS-targeted code generation but still with the
Unix-style command line, which lets me use the existing Makefile with
almost no change.
I became aware a few months ago that enough is known about CHM files
that free software _can_ write them without benefit of the MS HTML
Help compiler - in particular there's a thing called 'chmcmd' in the
Free Pascal Compiler software distribution which is more or less a
drop-in replacement for hhc.exe itself.
But although depending on chmcmd would be a bit nicer than depending
on hhc.exe, Halibut has always preferred to do the whole job itself if
it can. So here's my own from-scratch code to generate CHM directly
from Halibut source.
The new output mode is presented as a completely separate top-level
thing independent of HTML mode. Of course, in reality, the two back
ends share all of the HTML-generation code, differing only in a few
configuration defaults and the minor detail of what will be _done_
with each chunk of HTML as it's generated (this is what the recent
refactoring in b3db1cce3 was in aid of). But even so, the output modes
are properly independent from a user-visible-behaviour perspective:
they use parallel sets of config directives rather than sharing the
same ones (you can set \cfg{html-foo} and \cfg{chm-foo} independently,
for a great many values of 'foo'), and you can run either or neither
or both as you choose in a given run of Halibut.
The old HTML Help support, in the form of some config directives for
HTML mode to output the auxiliary files needed by hhc.exe, is still
around and should still work the same as it always did. I have no real
intention of removing it, partly for the reasons stated in the manual
(someone might find it useful to have Halibut generate the .HHP file
once and then make manual adjustments to it, so that they can change
styling options that the direct CHM output doesn't permit), and mostly
because it wouldn't save a great deal of code or complexity in any
case - the big two of the three auxiliary files (the HHC and HHK) have
to be generated _anyway_ to go inside the .CHM, so all the code would
have to stay around regardless.
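As a concrete illustration of the parallel directive families: 'html-chapter-numeric' is an existing HTML-mode directive, and the 'chm-' spelling of it shown here is inferred from the description above rather than checked against the manual, so treat the second line as an assumed name.

```
\cfg{html-chapter-numeric}{true}
\cfg{chm-chapter-numeric}{false}
```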
No functional change, but this opens the way to have further sets of
config directives that parallel the \cfg{html-foo} family but with
different prefixes in place of 'html'. The previous code to treat
prefixes 'html' and 'xhtml' the same was not general enough because it
depended on the coincidence that the former is a suffix of the latter.
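A sketch of the generalised matching (illustrative names, not the real Halibut code): each backend's prefix is tried in full against the directive name, rather than relying on "html" happening to be a suffix of "xhtml".

```c
#include <string.h>

/*
 * Illustrative sketch: if `directive` begins with `prefix` followed
 * by '-', return a pointer to the rest of the name; otherwise NULL.
 * Works equally for "html", "xhtml", "chm" or any future prefix.
 */
static const char *strip_prefix(const char *directive, const char *prefix)
{
    size_t n = strlen(prefix);
    if (!strncmp(directive, prefix, n) && directive[n] == '-')
        return directive + n + 1;
    return NULL;
}
```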
The general routines for analysing a buffer into an LZ77ish stream of
literals and matches, and for constructing a Huffman tree in canonical
format, now live in their own source files so that they can be reused
for other similar compression formats. Deflate-specific details like
the exact file encoding are left in deflate.c.
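A purely illustrative sketch of the kind of interface such a split suggests (these names are invented, not the actual ones): the generic LZ77 analyser reports literals and matches through callbacks, and the format-specific encoder decides how to serialise them.

```c
/* Hypothetical reusable interface: the analyser knows nothing about
 * the output file format, only how to report what it found. */
typedef struct LZ77Sink LZ77Sink;
struct LZ77Sink {
    void (*literal)(LZ77Sink *sink, unsigned char c);
    void (*match)(LZ77Sink *sink, int distance, int length);
};

/* Degenerate stand-in analyser: emits everything as literals.  A real
 * one would search a hash chain for back-references. */
static void lz77_analyse(const unsigned char *data, int len, LZ77Sink *sink)
{
    for (int i = 0; i < len; i++)
        sink->literal(sink, data[i]);
}

/* Example consumer: counts literals, to show how a backend plugs in
 * (the sink is the first struct member, so the cast is well defined). */
typedef struct { LZ77Sink sink; int literals; } CountingSink;

static void count_literal(LZ77Sink *s, unsigned char c)
{
    (void)c;
    ((CountingSink *)s)->literals++;
}

static void count_match(LZ77Sink *s, int d, int l)
{
    (void)s; (void)d; (void)l;
}

static int count_demo(const unsigned char *data, int len)
{
    CountingSink cs = { { count_literal, count_match }, 0 };
    lz77_analyse(data, len, &cs.sink);
    return cs.literals;
}
```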
I introduced this in commit b3db1cce3, which I hadn't really meant to
push at all yet (it's groundwork for a new feature that's still in
development), but which I had absentmindedly left lying around in my
usual checkout directory when the time came to do 84ed4f994, and which
got pushed as a side effect without having been quite fully tested
yet. Ahem.
This isn't strictly _necessary_, since git submodules have a built-in
check that you downloaded the commit you thought you'd downloaded (in
that they specify the exact commit hash you want), so as long as the
top-level halibut repo is being checked out securely, it doesn't
matter if the submodules come over git://. However, I can't see any
reason _not_ to switch over to https, since I want to make it the new
recommended standard for anon access to all my git repositories.
A user points out that that recently added markup feature is easy to
miss because it's not as prominently documented as it should be.
Thanks to Paul Curtis for reporting that 'if (i != prev+1)' would be
undefined on the first pass through this loop, because prev was never
initialised beforehand. Initialise it to a safe value.
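An illustrative reconstruction of the pattern (not the actual Halibut loop): detecting the start of each run of consecutive indices. Without the initialisation, the first 'i != prev+1' comparison reads an indeterminate value, which is undefined behaviour even though it usually "works".

```c
/*
 * Count runs of consecutive values, assuming all indices are >= 0.
 * prev starts at -2 so that prev + 1 == -1 can never equal a valid
 * index, guaranteeing the first element starts a new run.
 */
static int count_runs(const int *idx, int n)
{
    int runs = 0;
    int prev = -2;                 /* the fix: a safe initial value */
    for (int i = 0; i < n; i++) {
        if (idx[i] != prev + 1)
            runs++;                /* this element starts a new run */
        prev = idx[i];
    }
    return runs;
}
```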
Now you can 'make install prefix=/some/previously/nonexistent/path'
and have all the necessary subdirs created for you.
I had no idea that was a thing HTML Help could do at all, let alone
that .chm files could enable or disable it via a flag! But Tino
Reichardt sent in a patch which sets one extra bit in a flags word in
the .hhp file, and despite 'chmspec' not mentioning that bit at all,
it does indeed seem to do something useful.
'bob -s Buildscr.test' builds Halibut, runs it over inputs/test.but to
produce all the supported output formats, and delivers them all into
the output directory. Including Windows HTML Help, for which it has to
do a special run of the HTML back end with extra options and then get
a Windows box to run hhc (hence this having to be a Buildscr).
As far as I can tell from the source control history, Halibut has
_never_ actually printed an error message on failure to open one of
its input files! The error message has existed all along, but was
never actually invoked. Ahem.
There was a missing NULL check in the code that test-opens files in
both binary and text mode (for font-handling purposes).
Turns out we can get a null pointer passed through from the front end
if the input file was erroneous in some way, so we should do fallback
processing in that case (exactly what we do doesn't matter much)
rather than crash.
I often symlink it down from the build directory, so let's be
sympathetic to other people who like to type './halibut'.
This causes sensible error reporting if distance codes 30 or 31 appear
in a compressed block.
(Not that Halibut actually _uses_ the Deflate decoder - it only uses
the encoder - but if I've got a copy of this code here then it should
be correct.)
[originally from svn r10280]
[r10278 == 3fd8014ea7235d0ec34e8f97a34f3ecf576e8239 in putty repository]
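The check itself is tiny (sketch, not the actual code): RFC 1951 defines distance codes 0 through 29 only; codes 30 and 31 can be expressed in the code-length tables but must never occur in compressed data, so a decoder should reject them rather than index off the end of its distance tables.

```c
/* Deflate distance codes: only 0..29 are assigned by RFC 1951;
 * 30 and 31 are reserved and indicate corrupt or invalid input. */
static int dist_code_valid(int dcode)
{
    return dcode >= 0 && dcode <= 29;
}
```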
I forgot to add this in last week's versioning revamp, so that bob
builds work (constructing a version.h from the build script) but dev
builds straight from source control fail for lack of version.h.
[originally from svn r10277]
A long time ago, it seemed like a good idea to arrange that binaries
of Halibut would automatically cease to identify themselves as a
particular upstream version number if any changes were made to the
source code, so that if someone made a local tweak and distributed the
result then I wouldn't get blamed for the results. Since then I've
decided the whole idea is more trouble than it's worth, so I'm
retiring it completely.
[originally from svn r10254]
The \versionids in the docs are now added by the bob script; the one
in inputs/test.but has been replaced by fixed text (it didn't matter
what it contained anyway, of course, for test purposes), and the one
in misc/halibut.vim has simply been removed (it wasn't actually
expanded by svn anyway - it still had its old CVS value).
[originally from svn r10253]
The existing Halibut bob script defaults to building a completely
unversioned source tarball. I think building one with the version
format I'm now more or less standardising on (date + VCS id info) is a
more sensible default. So I'm retiring the SNAPSHOT setting, which I
never used anyway, and making the default work like that.
[originally from svn r10252]
Also from J. Lewis Muir.
[originally from svn r10213]
Patch due to J. Lewis Muir, who points out that the HTML backend's
current policy of _always_ disabling the TOC in single-file mode is
excessively harsh: in a long or formal enough document, you might
still want a TOC to make navigating around within the file easier,
even if it's not necessary to use it to get between multiple files.
So this change removes the unconditional prohibition against TOCs in
single-file documents, but they're still disabled by default, because
a single file counts as a leaf file and the existing default settings
disable TOCs in leaf files anyway. So if you do want a TOC in a single
file, you can reconfigure 'html-leaf-contains-contents' to true.
[originally from svn r10212]
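In Halibut input, that reconfiguration is then a one-liner (using the directive named above):

```
\cfg{html-leaf-contains-contents}{true}
```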
[originally from svn r10166]
[originally from svn r9774]
I've wanted this a couple of times in Halibut markup recently (in particular, it's
handy to have a typographical distinction between 'this term is
emphasised because it's new' and 'this term is emphasised because I
want you to pay attention to it'), so here's an implementation,
basically parallel to \e.
One slight oddity is that strong text in headings will not be
distinguished in some output formats, since they already use bolded
text for their headings.
[originally from svn r9772]
I'm not quite sure why I ever thought it was a good idea to have a
central variadic error() function taking an integer error code
followed by some list of arguments that depend on that code. It now
seems obvious to me that it's a much more sensible idea to have a
separate function per error, so that we can check at compile time that
the arguments to each error call are of the right number and type! So
I've done that instead.
A side effect is that the errors are no longer formatted into a
fixed-size buffer before going to stderr, so I can remove all the
%.200s precautions in the format strings.
[originally from svn r9639]
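The contrast in miniature (function and message names invented, not Halibut's actual ones). The old scheme was a single variadic entry point, which the compiler cannot type-check against the error code; one function per error makes every call site checkable, and each function can format straight to its destination with no fixed-size intermediate buffer or %.200s precaution.

```c
#include <stdio.h>

/*
 * Old style (sketch): nothing relates `code` to the trailing args.
 *
 *     void error(int code, ...);
 *     error(ERR_NOFILE, some_int);   // wrong type, compiles anyway
 *
 * New style: one function per error, so argument count and types are
 * checked at compile time.  Formatting into the caller's buffer here
 * just to keep the sketch testable; stderr would work the same way.
 */
static int format_err_nofile(char *buf, size_t size, const char *filename)
{
    return snprintf(buf, size, "unable to open input file '%s'", filename);
}
```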