The fastest way to revise and publish research.
Enjamb brings your team, comments, and manuscript into one intelligent platform with AI-native workflow automation.

Trusted by researchers at leading institutions
Built by researchers, for researchers
The Word experience, built for research

Migrate instantly. Import your manuscript and start collaborating in seconds. Our editor feels just like Word and, most importantly, keeps your .docx formatting 100% intact.
Anaphora—our state-of-the-art model for academic writing.
Ask Anaphora…
Help me address comment #4
Sure! I found 3 relevant papers in PubMed and 2 in ResearchGate…

Tap into real-time intelligence across 100M+ sources to resolve comments, manage tasks, and find citations.
Intelligent Merge
Collaborate without fear. Anaphora understands what you're writing, not just where, and automatically merges every co-author's edits, eliminating conflicts and broken documents.
Journal Templates
Submit to any journal with ease. Use one of our templates to auto-format your manuscript to fit journal guidelines.
Ditch the clutter
The entire research process in one window
Write your manuscript, manage tasks, view comments, and draft rebuttals, all in one place.

Comments you upload are automatically converted into actionable tasks and assigned to collaborators based on their skills and roles.
Confirm the formula for the position-wise feed-forward network
Expand the section on positional encodings to provide more intuition behind the choice of sine and cosine functions
Add a brief note on why d_k and d_v were set to 64
Verify the O(1) sequential operations claim for self-attention in Table 1

Jon Porchet
Verify the 3.5-day training time for the "big" model.
Confirm the P_drop rate of 0.1 for the base model.
Explain the intuition behind the learning rate formula and clarify the purpose of warmup_steps before the inverse square root decay begins.
Check the WMT 2014 EN-DE training set size (4.5M pairs).

Saimon Kurō
Clarify the purpose of the decoder's self-attention masking and explicitly state that the mask prevents positions from attending to subsequent positions.
Note the use of byte-pair encoding for the EN-DE task.
Confirm the parsing results for the WSJ-only 4-layer model.
Check the beam size and length penalty (alpha=0.6) used for translation.

Parbir Pattabiram
Go from unorganized comments
to a clean, task-oriented workspace
in seconds.
Instant backlinking
Find exactly which part of your manuscript a task or comment refers to.
Instantly.
Backlink
Add a brief note on why $d_k$ and $d_v$ were set to 64.