Bug 2035652 - Create basic scaffolding for conversational modal#239

Open
MatthewTighe wants to merge 6 commits into mozilla-firefox:autoland from MatthewTighe:aip-llm-context

Conversation

@MatthewTighe
Contributor

There's a bit of a problem with the keyboard pushing the text up, but I ran out of time and that feels pretty nonessential until we get further down the road.

[screenshot attachment]

@github-actions
Contributor

Warning

The base branch is currently set to main. Please Edit this PR and set the base to autoland.

@github-actions
Contributor

View this pull request in Lando to land it once approved.

@MatthewTighe changed the base branch from main to autoland on April 30, 2026 00:45
@MatthewTighe force-pushed the aip-llm-context branch 3 times, most recently from 08eaf19 to 263fc0d on April 30, 2026 22:08
Contributor

@segunfamisa segunfamisa left a comment


Overall, the changes make sense to me. I had a couple of questions to help me understand the work stream.

I approved it, but I'd take that lightly, since I am just building context on the work.

) {
next(action)
if (action is UserMessageSubmitted) {
scope.launch { sendMessage(store, action.text) }
Contributor


should we trim the text?
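One possible shape for that suggestion: normalize the input before dispatching and drop blank submissions entirely. This is a sketch only; `sanitizeUserMessage` is an illustrative name, not part of the PR.

```kotlin
// Illustrative helper: trim the submitted text, and treat blank input as
// "nothing to send" by returning null. Name and placement are hypothetical.
fun sanitizeUserMessage(raw: String): String? =
    raw.trim().ifEmpty { null }
```

The middleware could then launch `sendMessage` only when this returns non-null.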

* @param rawMarkdown The accumulated raw markdown text received so far.
* @param richDocument The incrementally-parsed [RichDocument] for display.
*/
data class PartialResponse(val rawMarkdown: String, val richDocument: RichDocument)
Contributor


I wonder if it's going to be helpful to have a way to deserialize RichDocument to text, whenever we need to, instead of keeping two copies of roughly the same content in memory.
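A hedged sketch of that idea: if the document tree can render itself back to markdown on demand, only the tree needs to live in state. The node names below are hypothetical stand-ins, not the PR's actual `RichDocument` API.

```kotlin
// Minimal illustrative tree that can re-serialize to markdown, so the raw
// string need not be kept alongside it. All names here are hypothetical.
sealed interface RichNode {
    fun toMarkdown(): String
}

data class Text(val value: String) : RichNode {
    override fun toMarkdown() = value
}

data class Bold(val child: RichNode) : RichNode {
    override fun toMarkdown() = "**${child.toMarkdown()}**"
}

data class Doc(val children: List<RichNode>) : RichNode {
    override fun toMarkdown() = children.joinToString("") { it.toMarkdown() }
}
```

The trade-off is re-serialization cost on each access versus holding both copies in memory.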

is ResponseFailed -> when (state) {
is AskPageState.Waiting -> AskPageState.Ready(messages = state.messages, hasError = true)
is AskPageState.Receiving -> AskPageState.Ready(messages = state.messages, hasError = true)
else -> state
Contributor


is there a situation where the previous LLM call fails, and we don't want to set the ready state and flag that it has an error?
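A related observation, sketched with simplified stand-in types: since both branches produce the same `Ready` state, they could share one `when` arm if `messages` lives on the sealed interface itself.

```kotlin
// Simplified stand-ins for the PR's AskPageState; messages is hoisted to the
// interface so the Waiting and Receiving arms can be merged.
sealed interface AskPageState {
    val messages: List<String>

    data class Waiting(override val messages: List<String>) : AskPageState
    data class Receiving(override val messages: List<String>) : AskPageState
    data class Ready(
        override val messages: List<String>,
        val hasError: Boolean,
    ) : AskPageState
}

fun onResponseFailed(state: AskPageState): AskPageState = when (state) {
    is AskPageState.Waiting, is AskPageState.Receiving ->
        AskPageState.Ready(state.messages, hasError = true)
    else -> state
}
```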


if (isUnavailable(provider)) return false

if (config.preparationStrategy == LlmPreparationStrategy.ActiveOnly) {
Contributor


I think this block could benefit from some comments. Do I understand correctly that, if the strategy is to prepare only the active provider, we have to prepare the new one whenever the user switches?


Since the preparation strategy does not sound like something that would change in the course of a session, if a user switches providers back and forth, it seems like we would always attempt to prepare the new provider.


Would it be helpful, instead, if we checked whether the provider is prepared or not, and only then, do we prepare it?
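The suggestion could look roughly like this, with names that are hypothetical; the real session presumably keys on provider instances rather than strings.

```kotlin
// Illustrative sketch: remember which providers were already prepared, and
// skip preparation when the user switches back to one of them.
class PreparedTracker {
    private val prepared = mutableSetOf<String>()

    /** Returns true only the first time a given provider id needs preparing. */
    fun shouldPrepare(providerId: String): Boolean = prepared.add(providerId)
}
```

With a check like this, switching back and forth between two providers would trigger at most one preparation per provider.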

Comment on lines +94 to +97
_activeProvider
.flatMapLatest { provider -> flow { emit(awaitLlm(provider)) } }
.filterNotNull()
.first()
Contributor


I'm not sure I really understand what this block is doing.

Am I reading this correctly: If we are not able to get an LLM from the activeProvider, we try again with the same active provider?

Is it a retry mechanism?

Reading the idea of a failover, I imagined it would iterate over the list of providers in the configuration and then pick the first successful one?
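The failover reading described above might be sketched like this, with `awaitLlm` reduced to a plain nullable-returning function for illustration; the PR's real API is suspend/Flow based.

```kotlin
// Illustrative failover: walk the configured providers in order and take the
// first one that yields an LLM, or null if none do. Names are hypothetical.
fun <P, L : Any> firstAvailableLlm(
    providers: List<P>,
    awaitLlm: (P) -> L?,
): L? = providers.firstNotNullOfOrNull(awaitLlm)
```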

import mozilla.components.concept.llm.LlmSessionConfig
import mozilla.components.concept.llm.LocalLlmProvider

internal class LlmSessionImpl(private val config: LlmSessionConfig) : LlmSession {
Contributor


I think we could add some tests for this class. I'm not sure what the plans are for this implementation beyond the prototype. Do we plan to revisit it and add tests?
