
modify usfm for chapter-level drafting to avoid import issues; move remarks to chapters #285

Draft
mshannon-sil wants to merge 6 commits into main from incremental_draft

Conversation

@mshannon-sil (Collaborator) commented Mar 26, 2026

This PR addresses issue #284.

Mostly looking for high-level feedback about the approach at the moment. As we were discussing, is the right place for this functionality in the get_usfm() method as essentially a post-processing step? Or should we look to implement this feature in process_tokens() (and maybe move the remark logic here as well)?

Some initial thoughts:
Pros for putting it in get_usfm():

  • The code is together in one cohesive unit, making it potentially easier to maintain than if it were spread across process_token().
  • If it's just for the purposes of importing, then it can be thought of as a kind of "view" that Paratext needs to avoid import issues, while the true model is kept unmodified in handler._tokens. This leaves open the option of accessing the unmodified USFM if needed in the future.

Pros for putting it in process_token():

  • Faster execution time, since it's all part of the same iteration.
  • If it's thought of as an essential change to the USFM structure, such that alternative views are unnecessary, it could make more structural sense to include it here.
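To make the first option concrete, here is a minimal, self-contained sketch of the "view" idea. The class and field names are illustrative stand-ins, not the real UpdateUsfmParserHandler API: get_usfm() applies chapter filtering only at serialization time, while self._tokens keeps the full, unmodified model.

```python
from typing import List, Optional, Set, Tuple

class Handler:
    """Illustrative stand-in, not the real UpdateUsfmParserHandler."""

    def __init__(self, tokens: List[Tuple[str, object]]) -> None:
        self._tokens = list(tokens)  # the unmodified model

    def get_usfm(self, chapters: Optional[Set[int]] = None) -> str:
        tokens = self._tokens
        if chapters is not None:
            kept, current = [], None
            for marker, data in self._tokens:
                if marker == "c":
                    current = data
                # header material before the first \c is always kept
                if current is None or current in chapters:
                    kept.append((marker, data))
            tokens = kept
        return " ".join(f"\\{marker} {data}" for marker, data in tokens)
```

With this shape, callers that need the whole book can still call get_usfm() with no argument, and the model is never mutated by filtering.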


@ddaspit (Contributor) left a comment


@ddaspit reviewed 2 files and all commit messages, and made 1 comment.
Reviewable status: all files reviewed, 1 unresolved discussion (waiting on Enkidu93 and mshannon-sil).


machine/corpora/update_usfm_parser_handler.py line 345 at r1 (raw file):

```python
        tokens = list(self._tokens)
        if chapters is not None:
            tokens = self._get_incremental_draft_tokens(tokens, chapters)
```

I think we can do something similar, but before we parse instead of after. Instead of calling parse_usfm in update_usfm, we can do something like this:

```python
tokenizer = UsfmTokenizer(self._settings.stylesheet)
tokens = tokenizer.tokenize(usfm)
tokens = filter_tokens_by_chapter(tokens, chapters)
parser = UsfmParser(tokens, handler, self._settings.stylesheet, self._settings.versification)
parser.process_tokens()
```

This would avoid updating the whole book.
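A hypothetical filter_tokens_by_chapter along these lines might look like the following sketch, with simplified stand-in token types (the real UsfmToken and UsfmTokenType live in machine.corpora and carry more state):

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Iterable, List, Optional, Set

# Simplified stand-ins for machine.corpora's UsfmToken/UsfmTokenType.
class TokenType(Enum):
    ID = auto()
    CHAPTER = auto()
    VERSE = auto()
    TEXT = auto()

@dataclass
class Token:
    type: TokenType
    data: Optional[str] = None

def filter_tokens_by_chapter(tokens: Iterable[Token], chapters: Set[int]) -> List[Token]:
    """Keep header material before the first \\c plus tokens in the requested chapters."""
    result: List[Token] = []
    in_selected = True  # everything before the first \c (e.g. \id) is kept
    for token in tokens:
        if token.type is TokenType.CHAPTER:
            try:
                in_selected = token.data is not None and int(token.data) in chapters
            except ValueError:  # malformed chapter reference such as "1."
                in_selected = False
        if in_selected:
            result.append(token)
    return result
```

Because filtering happens before parsing, only the selected chapters flow through the handler, which is what avoids rewriting the whole book.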

@mshannon-sil (Collaborator, Author)

machine/corpora/update_usfm_parser_handler.py line 345 at r1 (raw file):

Previously, ddaspit (Damien Daspit) wrote…

I think we can do something similar, but before we parse instead of after. Instead of calling parse_usfm in update_usfm, we can do something like this:

```python
tokenizer = UsfmTokenizer(self._settings.stylesheet)
tokens = tokenizer.tokenize(usfm)
tokens = filter_tokens_by_chapter(tokens, chapters)
parser = UsfmParser(tokens, handler, self._settings.stylesheet, self._settings.versification)
parser.process_tokens()
```

This would avoid updating the whole book.

I updated it accordingly; how does it look now?

If we change parse_usfm to accept a Sequence[UsfmToken] as well as a str, we could call it here and let it instantiate the parser and process the tokens, avoiding some code duplication. But I'm not sure if there's a reason parse_usfm only accepts str.
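For illustration, the overload could dispatch on input type roughly like this. The tokenizer and parser here are stubs standing in for UsfmTokenizer and UsfmParser; the signature change, not the stub internals, is the point:

```python
from typing import Sequence, Union

# Stubs standing in for machine.corpora's UsfmTokenizer/UsfmParser.
class StubTokenizer:
    def tokenize(self, usfm: str) -> Sequence[str]:
        return usfm.split()

class StubParser:
    def __init__(self, tokens: Sequence[str]) -> None:
        self.tokens = list(tokens)

    def process_tokens(self) -> int:
        return len(self.tokens)

def parse_usfm(usfm: Union[str, Sequence[str]]) -> int:
    # Accept either raw USFM text or a pre-tokenized sequence.
    tokens = StubTokenizer().tokenize(usfm) if isinstance(usfm, str) else usfm
    return StubParser(tokens).process_tokens()
```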

@Enkidu93 (Collaborator) left a comment


@Enkidu93 reviewed 4 files and all commit messages, and made 2 comments.
Reviewable status: all files reviewed, 3 unresolved discussions (waiting on ddaspit and mshannon-sil).


machine/corpora/paratext_project_text_updater_base.py line 97 at r2 (raw file):

```python
            in_id_marker = False
        elif token.type == UsfmTokenType.CHAPTER:
            if token.data and int(token.data) in chapters:
```

I think this may be safe now after some recent changes, but you may want to double-check what happens with bad chapter references like `\c 1.`, if you haven't already or it isn't already covered by tests.
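The risk being flagged is that int("1.") raises ValueError. A defensive parse (an illustrative helper, not existing library code) avoids crashing on malformed \c data:

```python
from typing import Optional

def try_parse_chapter(data: Optional[str]) -> Optional[int]:
    """Return the chapter number, or None for malformed \\c data like '1.'."""
    if data is None:
        return None
    try:
        return int(data.strip())
    except ValueError:
        return None
```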


machine/corpora/update_usfm_parser_handler.py line 348 at r2 (raw file):

```python
                remark_tokens.append(UsfmToken(UsfmTokenType.TEXT, text=remark))
            if len(tokens) > 0:
                for index, token in enumerate(tokens):
```

Don't we want to preserve the ability to add book-level remarks? Am I reading this correctly that we'd no longer be able to do so with this change? Peter has also put in a PR related to the per-chapter remarks (sillsdev/machine#408); are you guys coordinating on this? I think if we did something like he has there, we'd have a bit more flexibility.

@mshannon-sil (Collaborator, Author)

machine/corpora/update_usfm_parser_handler.py line 348 at r2 (raw file):

Previously, Enkidu93 (Eli C. Lowry) wrote…

Don't we want to preserve the ability to add book-level remarks? Am I reading this correctly that we'd no longer be able to do so with this change? Peter has also put in a PR related to the per-chapter remarks (sillsdev/machine#408); are you guys coordinating on this? I think if we did something like he has there, we'd have a bit more flexibility.

I was unaware that Peter was making any changes. As far as I'm aware, we want it working in SILNLP first (through machine.py) before making changes to machine.

What I remember from the NT meeting where we discussed remarks was that the team wanted to move all the remarks to the chapter level once we realized that \rem is valid in chapters, so we would no longer have book remarks. I can throw the question to the wider group to see what we think.

@ddaspit (Contributor) left a comment


@ddaspit reviewed 4 files and all commit messages, and made 3 comments.
Reviewable status: all files reviewed, 4 unresolved discussions (waiting on Enkidu93 and mshannon-sil).


machine/corpora/update_usfm_parser_handler.py line 345 at r1 (raw file):

Previously, mshannon-sil wrote…

I updated it accordingly; how does it look now?

If we change parse_usfm to accept a Sequence[UsfmToken] as well as a str, we could call it here and let it instantiate the parser and process the tokens, avoiding some code duplication. But I'm not sure if there's a reason parse_usfm only accepts str.

I think it makes sense to update parse_usfm to accept tokens as well.


machine/corpora/update_usfm_parser_handler.py line 348 at r2 (raw file):

Previously, mshannon-sil wrote…

I was unaware that Peter was making any changes. As far as I'm aware, we want it working in SILNLP first (through machine.py) before making changes to machine.

What I remember from the NT meeting where we discussed remarks was that the team wanted to move all the remarks to the chapter level once we realized that \rem is valid in chapters, so we would no longer have book remarks. I can throw the question to the wider group to see what we think.

The philosophy for implementing features in the Machine library is a bit different from that of our other applications. For applications, we only build what we need. Because Machine is a shared library, we try to keep any features we add generally useful (within reason). In this case, I think it makes sense to include the ability to add book-level remarks even if we aren't planning on using that specific feature right now.
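One way to keep both capabilities is a single insertion routine parameterized by level, sketched here with simplified stand-in token types rather than the real machine.corpora classes:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import List, Optional

# Simplified stand-ins for illustration; not machine.corpora's real types.
class TokenType(Enum):
    ID = auto()
    CHAPTER = auto()
    PARA = auto()
    TEXT = auto()

@dataclass
class Token:
    type: TokenType
    data: Optional[str] = None

def insert_remarks(tokens: List[Token], remark: str, book_level: bool) -> List[Token]:
    """Insert a \\rem after \\id (book level) or after every \\c (chapter level)."""
    result: List[Token] = []
    anchor = TokenType.ID if book_level else TokenType.CHAPTER
    for token in tokens:
        result.append(token)
        if token.type is anchor:
            result.append(Token(TokenType.PARA, "rem"))
            result.append(Token(TokenType.TEXT, remark))
    return result
```

A flag (or a pair of optional book-level and chapter-level remark arguments) keeps the chapter-level behavior this PR needs while leaving the book-level path available to other Machine consumers.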


machine/corpora/__init__.py line 30 at r2 (raw file):

```python
from .paratext_project_settings_parser_base import ParatextProjectSettingsParserBase
from .paratext_project_terms_parser_base import KeyTerm, ParatextProjectTermsParserBase
from .paratext_project_text_updater_base import ParatextProjectTextUpdaterBase, filter_tokens_by_chapter
```

This seems unnecessary.
