Merged
62 changes: 62 additions & 0 deletions .github/workflows/static.yml
@@ -0,0 +1,62 @@
name: Deploy site to Pages

on:
  # Runs on pushes targeting the `main` branch. Change this to `master` if you're
  # using the `master` branch as the default branch.
  push:
    branches: [main]

  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages
permissions:
  contents: read
  pages: write
  id-token: write

# Allow only one concurrent deployment, skipping runs queued between the run in-progress and latest queued.
# However, do NOT cancel in-progress runs as we want to allow these production deployments to complete.
concurrency:
  group: pages
  cancel-in-progress: false

jobs:
  # Build job
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      # - uses: pnpm/action-setup@v3 # Uncomment this block if you're using pnpm
      #   with:
      #     version: 9 # Not needed if you've set "packageManager" in package.json
      # - uses: oven-sh/setup-bun@v1 # Uncomment this if you're using Bun
      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: 22
          cache: npm # or pnpm / yarn
      - name: Setup Pages
        uses: actions/configure-pages@v4
      - name: Install dependencies
        run: npm ci # or pnpm install / yarn install / bun install
      - name: Build with VitePress
        run: npm run docs:build # or pnpm docs:build / yarn docs:build / bun run docs:build
      - name: Upload artifact
        uses: actions/upload-pages-artifact@v3
        with:
          path: docs/.vitepress/dist

  # Deployment job
  deploy:
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    needs: build
    runs-on: ubuntu-latest
    name: Deploy
    steps:
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v4
4 changes: 4 additions & 0 deletions .gitignore
@@ -91,3 +91,7 @@ fabric.properties

# Visual Studio Files
.vs

# Vitepress
docs/.vitepress/dist
docs/.vitepress/cache
63 changes: 63 additions & 0 deletions docs/.vitepress/config.mts
@@ -0,0 +1,63 @@
import {defineConfig} from 'vitepress'

// https://vitepress.dev/reference/site-config
export default defineConfig({
    title: "PhenX EFCore BulkInsert",
    description: "Super fast bulk insert for EF Core",
    themeConfig: {
        outline: "deep",
        search: {
            provider: 'local'
        },

        // https://vitepress.dev/reference/default-theme-config
        nav: [
            {text: 'Home', link: '/'},
            {text: 'Documentation', link: '/documentation'},
        ],

        sidebar: [
            {
                text: 'Getting started',
                items: [
                    {text: 'Installation', link: '/getting-started#installation'},
                    {text: 'Usage', link: '/getting-started#usage'},
                ]
            },
            {
                text: 'Documentation',
                link: '/documentation'
            },
            {
                text: 'Limitations',
                link: '/limitations'
            },
        ],

        editLink: {
            pattern: 'https://github.com/PhenX/PhenX.EntityFrameworkCore.BulkInsert/edit/main/docs/:path',
            text: 'Edit this page on GitHub'
        },

        lastUpdated: {
            text: 'Updated at',
            formatOptions: {
                dateStyle: 'full',
                timeStyle: 'medium'
            }
        },

        socialLinks: [
            {
                icon: 'github', link: 'https://github.com/PhenX/PhenX.EntityFrameworkCore.BulkInsert',
            }
        ],

        externalLinkIcon: true,

        footer: {
            message: 'Released under the MIT License.',
            copyright: 'Copyright © 2025-present Fabien Ménager'
        }
    }
})
206 changes: 206 additions & 0 deletions docs/documentation.md
@@ -0,0 +1,206 @@
# Configure the DbContext

Register the bulk insert provider in your `DbContextOptions`:

```csharp{6,8,10,12,14}
services.AddDbContext<MyDbContext>(options =>
{
    options
        // .UseSqlServer(connectionString) // or UseNpgsql or UseSqlite, as appropriate

        .UseBulkInsertPostgreSql()
        // OR
        .UseBulkInsertSqlServer()
        // OR
        .UseBulkInsertSqlite()
        // OR
        .UseBulkInsertMySql()
        // OR
        .UseBulkInsertOracle()
        ;
});
```

# Insert methods

There are two groups of methods for inserting data into the database:

* `ExecuteBulkInsert` - inserts the entities as fast as possible, without returning the inserted entities. This is suitable for scenarios where you don't need to access the inserted data immediately.
* `ExecuteBulkInsertReturnEntities` - inserts the entities and returns the inserted entities. This is useful when you need to access the inserted data right after the insertion, but it's slower because it requires creating an intermediate temporary table.

Each method has an asynchronous version (`ExecuteBulkInsertAsync` and `ExecuteBulkInsertReturnEntitiesAsync`).

These methods all take the same parameters:

* `IEnumerable<T>` - the collection of entities to insert.
* `Action<BulkInsertOptions<T>>` - an optional action to configure the bulk insert options, such as batch size, timeout, etc.
* `OnConflictOptions<T>` - optional conflict resolution options, such as ignoring conflicts or updating existing records.
* `CancellationToken` - an optional cancellation token to cancel the operation (asynchronous methods only).
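Putting the parameters together, a call that uses all of them might look like the following sketch (the `Product` entity and its `Sku` column are illustrative, and the parameter order is assumed from the list above):

```csharp
await dbContext.ExecuteBulkInsertAsync(
    entities, // IEnumerable<Product>
    options => options.BatchSize = 10_000, // optional bulk insert options
    onConflict: new OnConflictOptions<Product>
    {
        Match = e => new { e.Sku }, // requires a unique constraint on Sku
    },
    cancellationToken); // optional, asynchronous methods only
```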

### Basic usage

```csharp
// Asynchronously
await dbContext.ExecuteBulkInsertAsync(entities);

// Or synchronously
dbContext.ExecuteBulkInsert(entities);
```

### Bulk insert with options

```csharp
// Common options
await dbContext.ExecuteBulkInsertAsync(entities, options =>
{
    options.BatchSize = 1000; // Set the batch size for the insert operation; the default differs per provider
});

// Provider-specific options (when available), example for SQL Server
await dbContext.ExecuteBulkInsertAsync(entities, (SqlServerBulkInsertOptions o) => // <-- specify the provider options class here
{
    o.EnableStreaming = true; // Enable streaming for SQL Server
});

// Provider-specific options, supporting multiple providers
await dbContext.ExecuteBulkInsertAsync(entities, o =>
{
    o.MoveRows = true;

    if (o is SqlServerBulkInsertOptions sqlServerOptions)
    {
        sqlServerOptions.EnableStreaming = true;
    }
    else if (o is MySqlBulkInsertOptions mysqlOptions)
    {
        mysqlOptions.BatchSize = 1000;
    }
});
```

### Returning inserted entities

```csharp
await dbContext.ExecuteBulkInsertReturnEntitiesAsync(entities);
```
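The returned collection reflects the rows as stored in the database, so database-generated values are available immediately. A sketch, assuming `Id` is a database-generated key:

```csharp
var inserted = await dbContext.ExecuteBulkInsertReturnEntitiesAsync(entities);

foreach (var entity in inserted)
{
    Console.WriteLine($"Generated Id: {entity.Id}");
}
```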

### Conflict resolution / merge / upsert

Conflict resolution works by specifying columns that should be used to detect conflicts and the action to take when
a conflict is detected (e.g., update existing rows), using the `onConflict` parameter.

* The conflicting columns are specified with the `Match` property and must have a unique constraint in the database.
* The action to take when a conflict is detected is specified with the `Update` property. If not specified, the default action is to do nothing (i.e., skip the conflicting rows).
* You can also specify the condition for the update action using either the `Where` or the `RawWhere` property. If not specified, the update action will be applied to all conflicting rows.

```csharp
await dbContext.ExecuteBulkInsertAsync(entities, onConflict: new OnConflictOptions<TestEntity>
{
    Match = e => new
    {
        e.Name,
        // ...other columns to match on
    },

    // Optional: the update action; if not specified, conflicting rows are skipped.
    // `excluded` is the incoming row that caused the conflict, `inserted` is the row already in the database.
    Update = (inserted, excluded) => new TestEntity
    {
        Price = excluded.Price // Update the Price column with the incoming value
    },

    // Optional: a condition for the update action, either as a raw SQL condition...
    RawWhere = (insertedTable, excludedTable) => $"{excludedTable}.some_price > {insertedTable}.some_price",

    // ...OR as a lambda expression
    Where = (inserted, excluded) => excluded.Price > inserted.Price,
});
```

## Options

The default values for each provider are listed below.

You can override these defaults by passing an action to the `ExecuteBulkInsert` or `ExecuteBulkInsertReturnEntities` methods.

### BatchSize

* Type: `int`
* Default:
  * SQL Server: `50,000`
  * PostgreSQL: N/A (uses native bulk insert)
  * SQLite: `5` (INSERT statement with multiple values)
  * MySQL: N/A (uses native bulk insert)
  * Oracle: `50,000`

The number of rows to insert in each batch.

### CopyTimeout

* Type: `TimeSpan`
* Default: `10 minutes`

The timeout for the bulk insert operation.
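For very large datasets the 10-minute default may not be enough; the timeout can be raised like any other option (a sketch):

```csharp
await dbContext.ExecuteBulkInsertAsync(entities, options =>
{
    options.CopyTimeout = TimeSpan.FromMinutes(30); // allow up to 30 minutes for the copy
});
```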

### CopyGeneratedColumns

* Type: `bool`
* Default: `false`

Copy computed/generated columns.

### MoveRows

* Type: `bool`
* Default: `false` (PostgreSQL only)

Move rows between tables (PostgreSQL only); only applies when returning entities.

### SRID

* Type: `int`
* Default: `4326`

Sets the ID of the Spatial Reference System used by the Geometries to be inserted.

### NotifyProgressAfter

* Type: `int`
* Default: `unset`

Notify after X rows are copied. This is useful for tracking progress in long-running operations.

### OnProgress

* Type: `Action<int>`
* Default: `unset`

Callback for progress reporting. This is called with the number of rows copied so far.
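`NotifyProgressAfter` and `OnProgress` work together; for example, to log every 10,000 rows copied (a sketch):

```csharp
await dbContext.ExecuteBulkInsertAsync(entities, options =>
{
    options.NotifyProgressAfter = 10_000;
    options.OnProgress = copied => Console.WriteLine($"{copied} rows copied so far");
});
```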

### Converters

* Type: `IEnumerable<IValueConverter>`
* Default: `[GeometryConverter]` (SQL Server and PostgreSQL only)

List of value converters for custom types, such as spatial types.

### CopyOptions

* Type: `Enum`
* Default: `Default` (SQL Server and Oracle only)

Provider-specific copy/bulk options (`SqlBulkCopyOptions` for SQL Server, `OracleBulkCopyOptions` for Oracle, etc.).
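For example, on SQL Server the standard `SqlBulkCopyOptions` flags from `Microsoft.Data.SqlClient` can be combined (a sketch, assuming the provider passes them through to `SqlBulkCopy`):

```csharp
await dbContext.ExecuteBulkInsertAsync(entities, (SqlServerBulkInsertOptions o) =>
{
    // Take a table lock and preserve NULLs instead of applying column defaults
    o.CopyOptions = SqlBulkCopyOptions.TableLock | SqlBulkCopyOptions.KeepNulls;
});
```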

### EnableStreaming

* Type: `bool`
* Default: `false` (SQL Server only)

Enable streaming bulk copy for SQL Server.

### TypeProviders

* Type: `IEnumerable<ITypeProvider>`
* Default: `unset` (PostgreSQL only)

Custom PostgreSQL type providers for handling specific data types.