Robert O'Brien 68 min

Finding the Right Tool for the Job: CE Tools that Improve Efficiency and Save Time


Watch this webinar to learn about in-depth evaluations of CE analysis tools designed to minimize hands-on time and time spent on routine data review, enabling more complex sample analysis.



0:00

Hello and welcome to Finding the Right Tool for the Job: CE Tools that Improve

0:05

Efficiency and Save Time.

0:07

This is the third webinar in the six-part Future Trends in Forensic DNA

0:11

Technology series.

0:13

My name is Michelle Taylor, Editor-in-Chief of Forensic, and I will be your

0:16

moderator throughout.

0:18

For today's webinar, you can earn one hour of continuing education credit.

0:22

Following the conclusion of the webinar, you will receive an email with

0:25

information on how to obtain CE credit documentation.

0:29

We have a great line-up scheduled to present to you today, but before we begin,

0:33

I'd like to take just a moment to cover a few logistics.

0:36

At the end of the presentations, we will hold a question and answer session.

0:41

To ask a question, click on the Ask a Question tab in the upper right corner of

0:45

your screen.

0:46

Please also take note that the right side of the screen features an overview of

0:49

today's webinar, as well as more information about our speakers.

0:53

If you have a technical question during today's event, click on the "Test Your

0:56

Connection" button at the bottom of your screen.

0:59

From there, you can access additional webinar support.

1:02

We also invite you to use the social media widgets beneath the webinar to share

1:06

with your friends and colleagues.

1:08

Today, you will be hearing from Jamie Brockhold, senior technical application

1:13

scientist at Thermo Fisher Scientific.

1:16

In her current role, she provides troubleshooting and technical support to

1:20

forensic laboratories globally

1:22

and offers subject matter expertise as new products are introduced into the

1:26

human identification portfolio.

1:28

Prior to joining Thermo Fisher Scientific, Brockhold earned an MS in Forensic

1:32

Science with a concentration in Criminalistics

1:35

from the University of New Haven and went on to work in the Forensic Biology

1:39

Unit at the Massachusetts State Police Crime Laboratory.

1:43

You will also hear from Robert O'Brien, Forensic Biology Section Lead at

1:47

Florida International University.

1:50

O'Brien evaluates equipment and techniques used in biological collection and

1:54

DNA analysis.

1:56

His technology evaluations have been published as reports, presentations, and

2:00

scientific posters at industry events, and as reference publications.

2:05

He also advises operators on tests and techniques available for field use in

2:09

biological sample detection and screening.

2:12

O'Brien develops curricula for forensic DNA and serology testing programs,

2:17

delivering instruction both in person and remotely, something that's very

2:21

important right now.

2:23

Thank you for joining us for our third session in the six-part Future Trends in

2:27

Forensic DNA Technology webinar series.

2:30

After the webinar, please be sure to check your email for more information on

2:33

CE Credit Documentation.

2:35

We look forward to seeing you on August 20th for Part 4: Tips, Tricks, and Best

2:39

Practices for Gaining Efficiency in Your Forensic DNA Laboratory.

2:43

Without further ado, I'm going to hand it off to Jamie to get us started.

2:47

Thank you so much for that introduction, Michelle.

2:50

My presentation today is focusing on new features in our latest CE Data

2:54

Collection software programs that will help improve efficiency and save time.

2:59

We're going to be discussing enhancements in the 3500 Data Collection version

3:04

4.0.1, and also new features of the SeqStudio instrument and SeqStudio Data

3:09

Collection version 1.2.1.

3:12

Here's a quick overview of what the presentation will cover.

3:17

We'll discuss at a high level the new features of the latest software versions

3:21

on each of the CE instruments.

3:23

Unfortunately, we don't have time to go into great detail on all of the new

3:26

features, but we will get to drill down into the pull-up reduction aspects,

3:31

which are similar, but do have slight nuances for each instrument.

3:34

After reviewing the technicalities behind pull-up reduction, we'll jump into an

3:38

STR Kit comparison where we can get paired pull-up and data analysis across

3:42

five STR kits, two from Applied Biosystems, and three from other manufacturers.

3:49

We'll get started talking about the new enhancements in the 3500 Data

3:54

Collection software version 4.0.1.

3:58

After taking in a lot of customer feedback, since we first released the 3500

4:02

with Data Collection, we were able to include some great benefits with this

4:07

newest version.

4:09

As it relates to improved data interpretation, we did introduce an algorithm

4:13

that will reduce pull-up peaks for Applied Biosystems chemistry, and this is

4:17

what's going to be covered in a lot more detail in the upcoming slides.

4:22

We also optimized signal across the capillaries, and we did this in two ways.

4:27

In 24-cap systems, we introduced a spatial optimization to account for optical

4:33

variation across the 24 capillaries, and on both the 24 and 8-cap instruments,

4:40

we have a recommendation for a Z offset or a higher position in the well when

4:46

the capillaries go into the well for injection, and that can be performed by an

4:51

engineer

4:53

or a routine PM, or when the software is first installed, by performing an

4:58

autosampler calibration.

5:00

We also introduced an algorithm for reducing off-scale data for databasing

5:04

labs, or those labs running robust single-source reference samples.

5:10

Off-scale data recovery, what it does is it lifts the cap off the top end of

5:15

the dynamic range, so right now where your data is capped or off-scale, around

5:21

32,000, if you're running with off-scale data recovery, that cap is lifted up

5:26

to about 65,000.

5:30

Also, you will see pull-up related to off-scale peaks reduced. You do see a

5:35

little asterisk here that mentions GeneMapper ID-X version 1.6 for full

5:39

functionality of the off-scale recovery feature, and that is because it's only

5:45

in version 1.6 that you'll see that cap lifted up to 65,000.

5:50

If you analyze these HID files in earlier versions of ID-X, you will still see

5:56

reduced pull-up as it relates to the off-scale data,

6:00

but you won't see that cap lifted up to 65,000.

6:04

You'll still get the off-scale indicator when peaks reach about 32,000.

6:10

We also aimed to improve the user experience, and a lot of feature requests

6:14

went into these enhancements. The first thing we did was we added plate-

6:18

loading flexibility, where you can pause a run, bring the autosampler forward,

6:24

add a second plate, and restart the run.

6:27

You also, if you have two plates on, could pause a run after the first plate is

6:31

completed, for example, pull the autosampler forward, take the first plate off

6:36

, put a third plate on, and continue the same run.

6:41

We also added a six-dye installation standard, so on earlier versions of

6:45

software, the installation standard for the HID performance check was done

6:49

running Identifiler ladder.

6:52

You now have the option to run a five-dye check with Identifiler ladder, same as

6:56

before, or a new six-dye check with GlobalFiler ladder.

7:02

We included, like I mentioned, other feature requests, things like being able

7:06

to export the injection list, have a consumable log that is very easy to read

7:11

to see when consumables were taken off or put on the instrument, and by whom.

7:16

Those are a couple other ones, but there is a full list of all the new features

7:21

in the Data Collection user bulletin.

7:25

This Data Collection version, 4.0.1, is Windows 10 supported.

7:33

Talking a little bit about the SeqStudio, which is our newest CE instrument.

7:37

For those that don't know, the SeqStudio is a four-capillary CE instrument.

7:42

A couple of ways that we are going to save time with this instrument is there

7:46

is an automatic optical alignment, meaning that when the capillary is installed

7:51

on the instrument, the spatial calibration is automatic.

7:55

So no need to run that manually.

7:57

There is also an automatic spectral calibration adjustment, also called

8:01

auto-calibration in the literature, and we are going to be talking more about that

8:06

feature.

8:07

But basically, you only need to run a manual spectral.

8:11

You only have to set up a spectral one time for each dye set on the instrument.

8:15

And after that, you don't have to run a manual spectral calibration again.

8:21

You can see that this instrument is pretty small.

8:23

The dimensions are there.

8:25

So it's a nice, easy-to-lift, easy-to-move instrument that fits in a small

8:30

benchtop space, if that is a concern in your lab.

8:34

The data collection software is run right on the instrument itself.

8:38

There is a touch screen that you do everything on.

8:41

You set up your plates.

8:42

You can monitor your run.

8:43

You view results.

8:44

So there is no need to have a computer attached to this instrument at all.

8:49

The consumables, we have an all-in-one reagent cartridge, which you can see

8:53

sitting in front of the instrument in the picture here.

8:57

Everything but your cathode buffer is in this easy to handle cartridge.

9:03

Your array is here.

9:05

Your polymer is here.

9:07

And your polymer sits in a little insulator that stays chilled when the cartridge

9:11

is put into the instrument,

9:14

which gives the polymer a six-month on-instrument storage lifetime.

9:20

Your anode buffer reservoir is in the cartridge and your polymer delivery

9:25

system.

9:26

The cartridge is RFID tagged as is the cathode buffer container, which will sit

9:31

on the autosampler.

9:33

There is also no more plate sandwich.

9:36

The CE plate sits right on the autosampler, and then there is just a lid on

9:41

the autosampler that closes down over the septa that is on your plate.

9:46

GeneMapper ID-X version 1.6 is the ID-X version that can read the .fsa files coming from

9:53

the SeqStudio Data Collection software.

9:57

So there are no longer .hid files on this instrument and software.

10:01

And then there is also some adjunct software, which is optional to use.

10:05

You will get an SAE module that you can use to set up different users and

10:11

different permissions within the software and on the instrument.

10:14

And then there is also a SeqStudio Plate Manager, which can be used to help

10:19

set up your plate maps that can be imported into the instrument.

10:24

I should also mention it is a single polymer type.

10:28

It is a POP-1 polymer for sequencing, general fragment analysis, and STR

10:34

fragment analysis.

10:37

So only POP-1. So if you are running different applications on the SeqStudio,

10:44

you don't have to worry about changing your polymer out.

10:45

It is a 28 centimeter capillary array. I don't think I mentioned that either.

10:54

All right. So let's get into the pull up reduction features that are included

10:58

with these data collection versions.

11:00

So just to make sure we are all on the same page, we will do a quick overview

11:04

of pull up. We will talk about definitions and expectations first.

11:08

So generally we expect that pull up peaks are going to be within the range of 1

11:12

to 3 percent of the parent peaks that they fall underneath.

11:16

With off-scale data, we do know that these values could be slightly higher.

11:21

So you may see pull up above that 3 percent more often when you are talking

11:25

about an off-scale sample.

11:27

When we talk about pull up, there are really two different types of pull up.

11:31

There is spectral pull up and there is instrument specific pull up.

11:35

Spectral pull up occurs because when we run our manual calibration, we are

11:40

using pure dyes.

11:42

The matrix for deconvolution that is generated from that pure dye calibration

11:49

then gets applied to our samples.

11:51

In our samples, our dyes are attached to DNA fragments. That could affect the

11:57

spectra differently.

11:59

The difference between the spectra from the dyes being attached to the samples

12:05

and the dyes being pure dyes, that difference is what causes the error that

12:11

causes spectral pull up.
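The mismatch just described can be illustrated with a toy example. This is a minimal sketch of multicomponent spectral deconvolution under simplified assumptions, not the actual instrument algorithm: the raw signal is modeled as a calibration matrix times per-dye concentrations, and any difference between the pure-dye calibration spectra and the true in-sample spectra leaves a residual that appears as a phantom pull-up peak.

```python
import numpy as np

# Toy two-dye, two-wavelength-bin model of spectral deconvolution.
# M holds the pure-dye calibration spectra; the sample's true spectra
# differ slightly because the dyes are attached to DNA fragments.
M = np.array([[0.9, 0.2],
              [0.1, 0.8]])              # pure-dye calibration matrix
true_spectra = np.array([[0.85, 0.25],
                         [0.15, 0.75]]) # actual in-sample dye spectra
conc = np.array([1000.0, 0.0])          # only dye 1 is really present
raw = true_spectra @ conc               # observed signal on the detector
est = np.linalg.solve(M, raw)           # deconvolve with calibration matrix
# est[1] is nonzero: a phantom "pull-up" signal in the second dye channel,
# caused purely by the calibration/sample spectral mismatch
```

A sample-specific matrix, built from the sample's own spectra, removes that residual, which is the idea behind the pull-up reduction feature.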

12:13

In the raw data, spectral pull up, you can see a pull up peak that sits

12:17

directly centered under the parent peak and the same is true in the analyzed

12:22

data.

12:23

When we talk about instrument specific pull up, it is a little bit different.

12:29

Instrument specific pull up all has to do with the way that the fluorescent

12:32

molecules move across the detection window.

12:35

As the fluorescent tagged fragments move across the detection window, center in

12:40

the window and move out of the window, the spectrum shifts on the CCD.

12:46

What you end up seeing in the raw data is a sinusoidal shape underneath the

12:50

parent peak.

12:52

In the analyzed data, rather than seeing the pull up peak centered under the

12:56

parent peak, it will be shifted.

12:59

If there is instrument specific pull up and spectra pull up in the same dye

13:04

channel at the same time, they would be additive.

13:08

You would see maybe a single pull up peak, but the percentage might be slightly

13:15

higher.

13:17

What does the pull up reduction algorithm do on the 3500 in particular? This is

13:22

pull up reduction on the 3500 data collection version 4.0.1.

13:28

What the algorithm does is it allows the software to use sample specific

13:32

spectral data for AB dye sets.

13:35

That is chemistry using G5, J6 or J6T.

13:40

Rather than using a matrix for deconvolution based on the manual spectral that

13:45

was run, that pure dye spectral, the algorithm uses data from the samples

13:51

themselves.

13:53

It is a more accurate deconvolution for those samples.

13:58

Now, for some reason there may be limited data in a particular sample.

14:05

If you think about a low-level sample where maybe there are only a few peaks,

14:09

the algorithm is going to step back and say,

14:11

"Hey, there is not enough data here for me to generate a spectral that is any

14:16

better than the manual spectral that is stored here."

14:20

Rather than trying to do something with limited data, just revert back to the

14:24

manual calibration.

14:26

Even with samples that have lower quality or less data, there still

14:33

will be pull up reduction.

14:35

It will just be our traditional pull up reduction based off of the manual

14:39

spectral.

14:40

But when there is good quality data available and the algorithm is able to

14:45

generate a sample specific matrix, it will.

14:49

All pull up reduction happens prior to the creation of the HID files, and that

14:54

includes when sample-specific data is being used.

14:59

So any raw data that you see in ID-X will already have the reduction applied.

15:06

Now when we talk about pull up reduction on the SeqStudio, it's somewhat

15:10

similar and somewhat different.

15:12

So there are really two levels of pull up reduction on the SeqStudio. The

15:18

first level is called auto spectral or auto calibration,

15:19

and it is similar to what we just talked about with the 3500 pull up reduction

15:24

feature.

15:25

It is going to use sample specific spectral data, but it's going to do this for

15:29

any dye set.

15:30

So it doesn't just have to be an AB chemistry.

15:33

Any chemistry will have the benefit of the auto spectral.

15:38

Also similar to the 3500, if there is limited sample specific data, the

15:44

algorithm will still apply some level of pull up reduction.

15:49

It's a little bit different.

15:51

If a previous auto calibration matrix has been run, so you've had at least one

15:58

injection in a capillary with a particular dye set that has generated an auto

16:04

calibration,

16:05

the software will revert back to that auto calibration.

16:09

If there has never been an auto calibration in that capillary for that dye set,

16:15

then the software will go back to the manual calibration that was run.

16:20

The second level of pull up reduction on the SeqStudio is called Marker-to-

16:23

Marker Correction, and this is for Applied Biosystems dye sets only, G5, J6, and

16:30

J6T.

16:31

Here we have taken into account marker-to-marker variation as it relates to pull

16:36

up across the read region of the chemistry.

16:40

And this software will apply an optimized correction factor for each marker in

16:46

each kit.

16:47

And the way that you enable Marker-to-Marker Correction is on plate setup,

16:52

you'll select the chemistry that you're running.

16:55

It's also similar to the 3500 in that all pull up reduction takes place prior

17:00

to the creation of the FSA files, so any data that you view in ID-X will already

17:07

have pull up reduction applied.

17:12

Moving on to the STR Kit comparison.

17:15

So for this study, we had some general conditions that I'll

17:19

cover here.

17:20

We also used the same sample types for each chemistry and across the different

17:25

instruments that we ran.

17:27

So all amplification was done on a ProFlex PCR System.

17:31

Analysis was all done in GeneMapper ID-X version 1.6.

17:36

We also analyzed using minimum peak amplitude thresholds, which I'll discuss in

17:41

the upcoming slides.

17:44

Otherwise we followed manufacturers' recommendations and parameters for each kit

17:49

as it related to amplification and CE injection.

17:53

You can see the kits listed in the blue table.

17:57

So GlobalFiler IQC and NGM Detect were the two Applied

18:02

Biosystems kits.

18:04

PowerPlex Fusion 6C and PowerPlex ESI 17 Fast were the two kits from Promega

18:11

that we ran, and the Investigator 24plex QS kit was the kit from QIAGEN.

18:18

So five kits in total.

18:20

You can see the different cycles that are recommended, which is how we

18:23

performed amplification, and the different DNA inputs, either half a nanogram or

18:28

one nanogram, depending on the kit.

18:32

We ran what we call NP or non-probative samples.

18:36

So we picked four samples that could be commonly encountered by a laboratory.

18:42

So the first was a swab from a cell phone, the next a cigarette butt, a swab from

18:46

a baseball hat, and blood on cotton.

18:49

We also ran the kit positive control.

18:52

We utilized one 3500xL with Data Collection 4.0.1 and two different SeqStudio

18:58

instruments with Data Collection version 1.2.1.

19:06

So I mentioned that we used minimum thresholds for our analysis.

19:10

And the way that we calculated a minimum threshold was we analyzed non-template

19:15

control data for each of the kits on each instrument at one RFU.

19:21

We went into the sample and edited out any pull up.

19:26

So that could have been spikes, pull up from the size standard, or any other

19:30

artifacts.

19:31

So if there was a dye blob or any raised baseline areas, that all got edited

19:36

out.

19:37

Once we were left with all the non-artifactual data, we did some calculations

19:45

to get to the limit of quantification.

19:47

So we take the average peak height of all those peaks at 1 RFU plus 10 standard

19:52

deviations, and that gets us to our LOQ.

19:55

And this is the RFU value where you'd expect nearly all your noise to fall

19:59

below.

20:00

We took that LOQ and rounded it to the nearest 5, and that was the minimum

20:04

threshold.

20:05

So for each dye channel, we had a different minimum threshold.

20:09

If your LOQ was 22, we would have rounded down to 20 for the minimum threshold.
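The threshold calculation described above is simple arithmetic; here is a rough sketch, where `minimum_threshold` is a hypothetical helper, not part of any vendor software:

```python
import statistics

def minimum_threshold(noise_peak_heights, n_sd=10, step=5):
    """LOQ = average noise peak height + 10 standard deviations,
    rounded to the nearest 5 RFU, per the approach described above."""
    loq = (statistics.mean(noise_peak_heights)
           + n_sd * statistics.stdev(noise_peak_heights))
    return step * round(loq / step)
```

For example, noise peaks of 0, 2, and 4 RFU (mean 2, standard deviation 2) give an LOQ of 22, which rounds to a 20 RFU minimum threshold, matching the 22-to-20 example above.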

20:16

On the 3500 instruments, across the 5 different chemistries, the minimum

20:22

thresholds ranged from 35 to 85 across the dye channels.

20:26

That's omitting orange, that's omitting the size standard.

20:30

On the SeqStudio, if you recall, I mentioned we ran two different instruments,

20:34

so the range is a little wider.

20:37

The range was from 25 to 140 RFU across the dye channels, across all the

20:44

chemistries, for the two different instruments.

20:47

Okay, I just want to -- I use this term "complex spectral artifacts" when I

20:55

start talking about the results of the study,

20:57

so I just want to make sure everyone's on the same page with what I mean.

21:01

So traditional pull-up, that's pull-up that I'm going to be talking about that

21:05

is pull-up that falls underneath a parent peak.

21:09

It's very easy to calculate a pull-up percentage, so like the top figure that

21:13

you see here.
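That percentage calculation can be sketched in a couple of lines (the helper name is hypothetical):

```python
def pullup_percent(pullup_height_rfu, parent_height_rfu):
    """Pull-up peak height as a percentage of its parent peak height."""
    return 100.0 * pullup_height_rfu / parent_height_rfu

# A 150 RFU pull-up peak under a 10,000 RFU parent peak:
pullup_percent(150, 10_000)  # 1.5, within the expected 1-3% range
```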

21:14

When I talk about complex spectral artifacts, these are pull-up peaks where it's

21:19

more difficult to figure out

21:23

which parent peak is related to the pull-up peak, or it can be like what you

21:28

see in this figure here,

21:30

a bridging effect, or what some people might call a pull-down effect, where you

21:35

have -- we can count it as a spectral artifact if these peaks were called,

21:41

these red peaks,

21:42

but I didn't calculate any sort of pull-up percentage to a parent peak here.

21:50

So looking at the results on our 3500 for the pull-up assessment, the Applied

21:54

Biosystems kits, as a reminder,

21:57

we'll have pull-up reduction enabled, so the use of sample-specific spectral

22:01

data.

22:02

The non-Applied Biosystems kits have pull-up reduction disabled, so that would

22:07

just be using the traditional spectral calibration with those particular dye

22:12

sets.

22:13

We had a positive control and four non-probative samples for this analysis.

22:19

So in the chart here, there's a left side and a right side. To

22:24

the far right, you can see the count of those complex pull-up peaks.

22:28

So any pull-up peak that was counted but a percentage wasn't calculated for.

22:33

And then on the left side, you can see the more traditional pull-up peaks

22:39

counted and then the percentages that were calculated.

22:43

So overall, you can see if you look at the average percent of the parent peak,

22:49

which is in the third column here, this is all below 3%.

22:54

So for all five kits, the average pull-up was below 3%.

23:00

You can see that in the non-Applied Biosystems kits, there was some pull-up

23:05

above 3%, and even some pull-up above 5% in the two Promega kits.

23:12

And I just have a different way to look at this on the next slide since this is

23:17

a lot of numbers to look at quickly.

23:20

Here, what you see across the x-axis are the five different kits, and up the

23:25

y-axis are the number of pull-up peaks.

23:29

So that's traditional and complex counts of pull-up combined.

23:34

The different colors in the bars themselves, the red indicates the count of

23:39

complex pull-up peaks.

23:41

The lightest blue is pull-up that was above 5%.

23:45

The next shade of blue, pull-up below 5% and the purple pull-up that was below

23:52

3%.

23:53

So what you can see overall is that the non-Applied Biosystems kits have at

23:58

least three times the number of pull-up related artifacts

24:02

compared to GlobalFiler IQC and NGM Detect.

24:06

I wanted to put this into a relatable way to look at it if you were

24:14

an analyst in the lab.

24:18

So I wanted to consider time-saving.

24:20

So what does a three times reduction in the number of pull-up peaks look like

24:25

to someone who's taking the time to analyze the data?

24:29

So I had to start with an assumption. I said, let's say it takes about 30

24:33

seconds per pull-up edit.

24:35

I figured that was fair, given that some peaks will be pretty easy to visualize

24:39

as pull-up; for others,

24:41

you'll have to go into the raw data, maybe do some calculations, things like

24:44

that.

24:45

So I averaged about 30 seconds per pull-up edit.

24:48

So we can say that if an Applied Biosystems kit had 70 pull-up peaks in five

24:53

samples, which is similar to the data that we saw in this study,

24:57

a non-Applied Biosystems kit would have three times that, or 210 pull-up peaks

25:01

in five samples.

25:03

So that would mean that it would take someone 35 minutes to analyze the

25:08

pull-up peaks

25:10

in five samples with an Applied Biosystems kit, versus an

25:14

hour and 45 minutes to analyze pull-up in five samples run with a non-Applied

25:20

Biosystems kit.

25:22

If you double that time, considering there has to be a secondary review, we're

25:26

talking about an hour and 10 minutes to analyze an Applied Biosystems kit, five

25:30

samples, just talking about pull-up,

25:33

versus three hours and 30 minutes for a non-AB kit, five samples, just talking

25:38

about pull-up.
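The time estimates above follow directly from the 30-seconds-per-edit assumption; here is a quick sketch of the arithmetic (a hypothetical helper reflecting the assumptions stated in the talk):

```python
def review_minutes(n_pullup_edits, seconds_per_edit=30, passes=1):
    """Minutes spent editing pull-up; set passes=2 to include secondary review."""
    return n_pullup_edits * seconds_per_edit * passes / 60

review_minutes(70)             # 35.0 min for an AB kit, initial analysis
review_minutes(210)            # 105.0 min (1 h 45 min) for a non-AB kit
review_minutes(210, passes=2)  # 210.0 min (3 h 30 min) including review
```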

25:44

So going over to the SeqStudio instrument, if you remember, we had two

25:47

different instruments, so this table is even a little bit busier than the last

25:51

one we looked at.

25:53

As a reminder, Applied Biosystems kits will have the auto-spectral enabled and

25:58

Marker-to-Marker Correction.

26:01

The non-AB kits will just have auto-spectral enabled.

26:05

Again, it was a positive control and four non-probative samples that were

26:10

analyzed.

26:11

So, again, on the far right, you can see those complex pull-up peak counts, and

26:16

then on the left, you can see the traditional pull-up counts as well as the

26:19

percentages that were calculated.

26:22

Again, all kits had an average pull-up percentage of less than 3%.

26:28

All of the non-AB kits, in this case, had pull-up greater than 3%, and a few

26:35

had pull-up still greater than 5%.

26:39

So for those that like to look at things in a different fashion, I much prefer

26:46

this fashion.

26:48

Very similar: five kits across the x-axis, the number of total pull-up peaks

26:53

up the y-axis.

26:55

You can see broken down the complex pull-up that we saw in the non-AB kits, and

27:01

then the different percentage levels in the blue and purple colors.

27:06

You can see a difference between GlobalFiler IQC and NGM Detect as far as the

27:11

number of pull-up peaks, 6 versus 21 for the two kits.

27:15

And so when we compared the Applied Biosystems kits to the non-AB kits, I did

27:20

split it out because there was that discrepancy.

27:23

So for GlobalFiler IQC, the other kits had at least 13 times the number of

27:28

pull-up related artifacts, and for NGM Detect, at least four times the number

27:33

of pull-up related artifacts.

27:36

Again, considering time-saving, so how might this impact you if you're the

27:42

analyst and/or the reviewer?

27:46

Again, I assumed 30 seconds per pull-up edit, but I did split it out based on

27:51

the difference we saw with GlobalFiler IQC and NGM Detect.

27:55

So for GlobalFiler IQC, if there were three pull-up peaks in five samples, the

28:00

other kits would have 13 times that, or 39 pull-up peaks in those five samples.

28:05

So this would mean a time of one and a half minutes

28:10

to analyze the three pull-up peaks in GlobalFiler IQC, versus 19 and

28:16

a half minutes in a non-AB kit.

28:19

Doubling the time for review, you would see a

28:25

36-minute savings in time.

28:29

For NGM Detect, which would have, let's say, 10 pull-up peaks in five samples,

28:34

the other kit would then have four times that, or 40 pull-up peaks in those

28:39

samples.

28:40

So for initial analysis, we're looking at five minutes versus 20 minutes,

28:45

whether you're running NGM Detect or a non-AB kit, and doubling that time

28:49

for review, you're talking about a 30-minute time savings.

28:57

So in conclusion, pull-up reduction on both the 3500 and SeqStudio with the

29:02

latest data collection software versions greatly reduced the number of pull-up

29:08

related artifacts for the Applied Biosystems kits.

29:12

On the 3500, it reduced the pull-up artifacts by at least three times; on the

29:19

SeqStudio, depending on the kit, there was a four- or 13-times

29:25

reduction in the number of edits for the Applied Biosystems kits.

29:29

And just as a reminder, the SeqStudio does have that added level of pull-up

29:33

reduction, the marker-to-marker correction.

29:36

So taking into account the pull-up reduction and also some of the new features

29:40

I talked about at the very beginning of the presentation, both the 3500 Data

29:45

Collection version 4.0.1 and the SeqStudio with Data Collection version 1.2.1

29:51

can streamline data analysis, especially when used in conjunction with AB STR

29:57

kits.

29:58

Thank you so much for your time. That concludes my presentation, and I'm going

30:02

to pass the presentation over to Robert.

30:05

Okay, so today I'm going to be talking about the right tool for the job. I'm

30:08

Robert O'Brien from the National Forensic Science Technology Center at Florida

30:12

International University.

30:14

Before I begin, I just want to thank Michelle and Jamie.

30:18

What I'll be talking about is the Applied Biosystems SeqStudio Genetic

30:26

Analyzer for HID and the Applied Biosystems RapidHIT ID System.

30:32

So, the right tool for the right job: what we'll be looking at is the SeqStudio

30:40

and the RapidHIT ID system. We'll be looking at systems and features, the

30:45

instrument operation, the maintenance, and some data that we have run on both

30:47

systems.

30:48

So first, let's talk about the SeqStudio versus the 3500.

30:53

So dimension-wise, the SeqStudio instrument has a width of about 49.5

30:57

centimeters, a depth of 64.8 centimeters and height of 44.2 centimeters.

31:03

Whereas the 3500 has a width of 61 centimeters, a depth of 61 centimeters, and

31:08

a height of 72 centimeters.

31:10

What is important to note is that for the 3500, you do need to

31:14

have a clearance space of about 122 centimeters in order to open the door.

31:19

So the SeqStudio has a much smaller footprint than traditional CE instruments

31:23

like the 3500.

31:24

So it's definitely going to be a shorter instrument, and it's not as wide.

31:30

Width is usually the most crucial factor, since that dictates the space

31:34

needed on a laboratory bench.

31:38

So if you consider the space that is needed to open the door of the 3500, you

31:43

can in fact fit two SeqStudios in that space.

31:46

And the SeqStudio door actually opens upwards instead of to the right.

31:53

So therefore the opening of the door of the SeqStudio does not take up any

31:59

additional space.

32:01

The SeqStudio instrument is a cartridge-based instrument.

32:05

This puts the majority of the components of the CE system into one easy to

32:08

change cartridge.

32:10

The cartridge shown contains a capillary array with four capillaries.

32:15

The pump of the CE system is actually part of the cartridge, with the detection

32:21

windows sitting behind the optical cover.

32:22

The POP-1 allows the same cartridge to be used for fragment analysis and

32:26

sequencing.

32:27

The anode buffer is also contained inside the cartridge.

32:30

The only other consumable that needs to be changed is the cathode buffer.

32:37

So the cartridge system makes changing of the capillary array very simple.

32:42

This reduces the training time that is needed to ensure that you have a perfect

32:45

alignment with the laser every time.

32:48

A spatial calibration does not need to be performed when starting a new cartridge, as

32:53

the system performs an automatic optical alignment every time you place in a new

32:57

cartridge.

32:58

Apart from the main cartridge, the cathode buffer and autosampler are shown in

33:03

the diagram, as you can see from the arrows.

33:06

A 96-well plate or 8-strip tubes can sit directly on the autosampler,

33:12

which has a lid attached.

33:14

So you do not need a plate holder as you did for the 3500.

33:20

And this system uses the same plates as the 3500, so no special consumables will

33:24

be needed if you switch from the 3500 to the SeqStudio.

33:28

A J6 spectral calibration will be performed during the installation, along with the HID

33:32

install check, which uses a GlobalFiler allelic ladder.

33:36

Since the system has auto spectral calibration as Jamie discussed, there is no

33:40

need to run a manual spectral calibration other than that single time.

33:45

The touchscreen display on the SeqStudio allows for full operation of the

33:50

instrument, including plate setup on the instrument itself.

33:55

This means that a separate computer system is not used to operate the

33:58

instrument, as with the 3500.

34:00

So this once again is reducing the amount of space and the footprint of the

34:04

instrument.

34:05

However, a desktop or laptop is available to interface with the instrument for

34:11

optional SAE control, Plate Manager setup software, and/or GeneMapper ID-X

34:18

version 1.6.

34:23

The SeqStudio was tested against the 3500.

34:26

The SeqStudio's results were 100% concordant with the 3500 in the following data

34:30

sets.

34:31

We did 20 buccal swab reference samples.

34:34

We did do a sensitivity series based on total input of DNA.

34:38

However, these were all normalized to one nanogram.

34:41

This data was actually used later on with the RapidHIT ID.

34:45

We did a sensitivity series based on sample volume, with 4 microliters, 2 microliters,

34:49

1 microliter, and 0.5 microliters.

34:51

For saliva, we did 8 microliters, 4 microliters, 2 microliters, and 1

34:55

microliter.

34:56

And for mixture samples, we did a 1-to-1, a 1-to-4, a 1-to-8, and a 1-to-16.

35:01

Once again, it's important to note that we ran the same plate that we ran on

35:05

the SeqStudio on the 3500, and we got 100% concordance between the two

35:10

instruments.
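The concordance check described here, comparing allele calls from both instruments sample by sample, can be sketched in a few lines. This is a minimal sketch only; the locus names and the dictionary layout are hypothetical simplifications, not the actual GeneMapper ID-X export format.

```python
# Minimal sketch of a per-locus concordance check between two instruments.
# The profile layout (locus -> allele calls) and locus names are hypothetical,
# not the actual GeneMapper ID-X export format.

def concordant(profile_a, profile_b):
    """True when both profiles call the same alleles at every locus."""
    if profile_a.keys() != profile_b.keys():
        return False
    return all(sorted(profile_a[locus]) == sorted(profile_b[locus])
               for locus in profile_a)

seqstudio_calls = {"D8S1179": ["12", "13"], "TH01": ["6", "9.3"]}
calls_3500      = {"D8S1179": ["12", "13"], "TH01": ["6", "9.3"]}
print(concordant(seqstudio_calls, calls_3500))  # True
```

Running this over every sample in the plate and requiring `True` throughout is what "100% concordance" amounts to.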

35:16

The one difference to note for data analysis is that the SeqStudio does require

35:20

GeneMapper ID-X version 1.6.

35:25

And this also shows that implementation of a SeqStudio does not in any way

35:29

affect the quality of the data that you're going to get from your CE instrument.

35:35

So now we're going to look at the RapidHIT ID system.

35:38

For the RapidHIT ID system, first we're going to talk about the instrument

35:41

specifications.

35:42

So the height is about 48 centimeters, the length is 53 centimeters and it has

35:46

a width of 27 centimeters.

35:48

So therefore it's very small and is very easily able to be placed into

35:52

nearly any room,

35:54

whether this be in the laboratory or booking station, etc.

35:58

The approximate weight is 28.4 kilograms with a primary cartridge installed and

36:03

25.4 kilograms without the primary cartridge.

36:06

The only other space requirement that may be needed for this instrument is if

36:09

you're going to have a laptop next to it to get the data coming off of the

36:14

instrument itself.

36:16

However, you can have that laptop in a centralized location and have the data

36:21

sent to it. We'll talk about that later.

36:24

So based on these dimensions, many of the instruments could be placed alongside

36:28

each other.

36:29

So you can actually have a bank of these instruments and they can easily expand

36:33

the capability without requiring much more space than its larger predecessor,

36:37

the RapidHIT 200.

36:39

The weight of the instrument also makes the instrument itself an easy

36:43

one-person carry, or in a case it could be a two-person carry, which makes it ideal

36:48

for transporting to crime scenes.

36:54

The RapidHIT ID system is also a cartridge-based system, so it has a primary

36:58

cartridge and a sample cartridge.

37:01

This allows for easy operation and maintenance of the instrument and because of

37:06

the ease of use, very little training is required for the user.

37:15

The primary cartridge contains the following. It contains all the components

37:24

necessary for CE. It contains the polymer, which is shipped separately and stored refrigerated. It is loaded into the primary cartridge before the cartridge is placed into the

37:28

instrument.

37:29

It contains the capillary, and the anode and cathode buffers.

37:34

The primary cartridge is guaranteed for 100 samples. In tests with continued

37:38

daily use, it has been possible to get more than 100 samples out of the primary

37:43

cartridge.

37:44

As for sample cartridges, there are two different ones. There is the

37:53

ACE GlobalFiler Express cartridge with the purple label, and the Rapid Intel

37:57

cartridge with the pink label.

37:59

The sample cartridges can be stored up to six months when refrigerated or for

38:03

two months at room temperature, and both cartridges use GlobalFiler Express

38:08

chemistry.

38:09

Let's talk a little bit more about the ACE versus the Intel cartridge.

38:19

The ACE GlobalFiler Express sample cartridge is intended for use with reference

38:23

samples like buccal swabs.

38:25

It can also be used for other high-level samples, for example, blood. The

38:30

runtime is approximately 90 minutes.

38:33

The Rapid Intel sample cartridge is intended for use with single-source crime-

38:37

scene type samples, for example saliva and blood.

38:41

It has improved performance for low-level samples. It could be used for blood,

38:45

drinking containers, cigarette butts, and other similar items, and the runtime is

38:50

approximately 95 minutes.

38:53

The instrument is able to determine which runtime to use based on the cartridge

38:58

inserted. Therefore, the user does not have to adjust runtimes depending

39:02

on what cartridge they're using. It's done automatically for you.

39:08

The RapidHIT ID system comes with a laptop or desktop, which has loaded onto it

39:12

the RapidLink software. Samples are instantly imported and can be used with the

39:17

software to generate matches or other investigative leads.

39:22

We'll talk a little bit about the RapidLink software now. The RapidLink

39:26

software can be used to link several instruments to one main computer.

39:31

The map shows the location of each instrument connected to the computer with

39:35

the RapidLink software.

39:37

If the location is flagged green, it means the instrument or instruments of

39:41

that location are all connected and operational.

39:45

The RapidLink software is used to determine the following. Location,

39:49

functionality, runs performed per day, and runs performed per instrument.

39:55

So at the bottom left of the screen with the blue and red bars, this shows the

39:58

number of runs being performed per day and total across all instruments at that

40:03

location.

40:05

Blue means successful runs and red means unsuccessful.

40:10

At the bottom right, this shows the number of runs performed at each different

40:14

instrument. In this way, the operator of the software can track the reagent use

40:17

for each instrument.

40:19

This allows one central location to be in charge of reordering of reagents and

40:24

taking the burden off of the users at crime scenes or booking stations.
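The per-day and per-instrument tallies described above amount to simple counting over a run log. A minimal sketch follows; the run-log records and instrument IDs are hypothetical, not the real RapidLink data model.

```python
from collections import Counter

# Hypothetical run-log records; the real RapidLink data model is not shown here.
runs = [
    {"instrument": "RH-001", "date": "2020-06-01", "success": True},
    {"instrument": "RH-001", "date": "2020-06-01", "success": False},
    {"instrument": "RH-002", "date": "2020-06-02", "success": True},
]

# Runs per instrument: a proxy for reagent use, since each run consumes a cartridge.
per_instrument = Counter(r["instrument"] for r in runs)

# Successful (blue) versus unsuccessful (red) runs, tallied per day.
per_day_success = Counter((r["date"], r["success"]) for r in runs)

print(per_instrument["RH-001"])               # 2
print(per_day_success[("2020-06-01", False)]) # 1
```

One central operator watching these tallies is what lets a single location handle reagent reordering for every connected instrument.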

40:33

So, very similar to GeneMapper, we do have quality flags also on the RapidHIT ID

40:39

system. So at the end of the run, a quality flag appears on the screen of the

40:44

RapidHIT ID for the sample.

40:47

The colors are: green, which basically means the sizing passed; the profile can be

40:52

used in all functions of the RapidLink software.

40:55

Yellow means the sizing also passed, but the profile does require review before it can be

40:59

used in all functions of the RapidLink software and red means that the sizing

41:03

has failed.

41:05

These actually are displayed on the instrument itself, so the user is able to

41:08

tell instantly whether the sample is going to be a good sample, whether it's going to

41:12

require review or may need to be rerun, or if there was some other issue.
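The green/yellow/red flag logic described above can be written down as a small decision function. This is a sketch only; the boolean inputs are hypothetical simplifications of the instrument's internal sizing checks.

```python
# Sketch of the traffic-light quality-flag logic; the boolean inputs are
# hypothetical simplifications of the instrument's internal sizing checks.

def quality_flag(sizing_passed: bool, needs_review: bool) -> str:
    if not sizing_passed:
        return "red"      # sizing failed
    if needs_review:
        return "yellow"   # sizing passed, but review is required first
    return "green"        # usable in all RapidLink functions

print(quality_flag(True, False))  # green
print(quality_flag(True, True))   # yellow
print(quality_flag(False, True))  # red
```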

41:21

Regardless of the quality flag displayed on the instrument screen, the run

41:25

information is transferred to the RapidLink software and the quality flags are

41:30

also seen in the software.

41:32

So as you can see from the image on the right there, the quality flags are

41:35

duplicated in the software. If it's green, you'll see a green checkmark; if it's

41:39

yellow, you will see a box with a yellow arrow; and if it is red, you'll see a

41:44

circle with a red X inside.

41:47

So that's how the quality flags are displayed. In this way, someone with access

41:52

to the RapidLink software can monitor the quality of the data coming off the

41:56

instrument and take any steps necessary in case of a problem, taking the burden

42:00

off the user.

42:01

So one person at central location can see all the data coming off of all the

42:05

different instruments and be able to tell at a glance whether it's good quality

42:09

data coming off, whether that person perhaps will have to do reviews or if

42:14

there's several failures occurring,

42:16

they can actually address the problem and look into some

42:19

troubleshooting. So, once again, the user, whether they're at a booking

42:22

station or crime scene, does not have to take on this burden.

42:30

Now there's some other features of the RapidLink software. So the RapidLink

42:33

software has additional apps or applications that can be purchased separately.

42:37

These apps have the following features. So first you've got the one to the far

42:41

left is a match app. This can be used to match any DNA profiles that are

42:45

imported into that computer with the RapidLink software.

42:49

So all the data being fed into this one computer with the RapidLink software

42:53

can be used to generate matches.

42:56

The familial app can be used to do a familial search of all profiles imported

43:00

into that RapidLink software.

43:03

The Kinship app can be used to verify a stated relationship between two

43:07

profiles and the SED, which stands for the staff elimination database.

43:13

The DNA profiles from staff members can actually be imported into the RapidLink

43:17

software and can be automatically compared to profiles imported into the

43:21

software to check for possible contamination.

43:25

It is important to note that since many instruments can be connected to one

43:29

computer with the RapidLink software, one advanced operator can perform all

43:34

these functions in one central location as opposed to having all the users,

43:38

whether at the crime scene or booking station, attempting to carry out the

43:41

analysis.

43:42

The RapidLink software is built to allow users with minimal training to operate

43:46

the instrument while a more advanced user, or a DNA analyst, for example,

43:50

monitors the instruments and checks the quality of the data generated and

43:55

controls how the data will be used in an investigation.

44:02

So let's look at some data generated from the RapidHIT ID.

44:06

So here are some of the studies that we ran on the RapidHIT ID to measure the performance of

44:10

the instrument. We did reference samples, which are simply buccal swabs.

44:13

We did a sensitivity series based on total nanogram input of DNA.

44:18

We did another sensitivity series based on the volume of DNA placed on the swab.

44:22

We did a mixture detection where we were looking for the ability of the

44:25

instrument to detect a minor contributor.

44:27

And of course we had a concordance study with the 3500 and SeqStudio.

44:34

So here's an example of 20 buccal swabs that were run on the RapidHIT ID system,

44:38

which had a 100% first pass success rate.

44:42

This was done with the ACE cartridge.

44:44

These samples were all concordant with the samples run on the 3500 and the

44:49

SeqStudio.

44:50

This means that all swabs registered a green quality flag.

44:54

It is important to note that for a sample to get a green flag, all alleles must

44:59

be present and called.

45:01

Therefore the first pass success rate means that there were full profiles

45:05

generated for all the swabs run and no reruns were necessary.

45:09

For males, the profiles gave all 24 out of 24 alleles, and for females, 22 out of 22.

45:15

The image here shows how these were displayed on the RapidLink software

45:20

screen.

45:21

From left to right you have the quality flag, the date and time of the run, the

45:26

sample name, the cartridge being used,

45:29

the location, the instrument serial number and then the user.

45:33

A sensitivity series was also done with the total input of DNA.

45:43

We basically did a range from 1280 nanograms to 10 nanograms.

45:47

As seen from the heat map, from 1280 nanograms down to 40 nanograms,

45:55

all alleles were present.

45:57

For 20 nanograms and 10 nanograms there were some samples where dropout was

46:02

detected.

46:03

Even with dropout, the lowest number of loci detected was 17, which is more

46:08

than enough for comparison purposes.

46:11

This shows the great dynamic range of the Intel cartridge, and this is useful

46:17

for high and low level samples.

46:19

This was done with blood.

46:21

This makes it especially useful for crime scene samples where the amount of

46:25

input DNA is not known.

46:27

So with the RapidHIT ID Intel cartridge, you do not have to worry about possibly

46:32

putting in too much blood or too little blood.

46:35

It's going to give you a good result either way.

46:39

We did a sensitivity series with sample volumes, and here we have liquid saliva

46:48

that was pipetted onto swabs.

46:50

We also did that with blood samples.

46:54

We did various volumes of 4 microliters, 2 microliters, and 1 microliter.

47:00

And you can see at 1 microliter of saliva there were still 85% of alleles

47:05

present.

47:06

This shows that the RapidHIT ID is able to handle very low volumes of saliva while still

47:10

generating enough data to be used for comparison purposes.

47:14

We also did that with blood samples. We did various volumes of 4 microliters,

47:21

2 microliters, 1 microliter, and 0.5 microliters.

47:24

And once again, the graph shows that even at a volume of 0.5

47:28

microliters, 96% of the alleles were detected using the RapidHIT ID.

47:33

And this shows that the RapidHIT ID is suited for detection of DNA from very small

47:38

samples.
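The "% of alleles detected" figures quoted in these volume series are a simple ratio against the expected full profile. A minimal sketch, using hypothetical allele labels rather than the study's actual data:

```python
# Sketch of the "% alleles detected" metric; the allele labels below are
# hypothetical, not the study's actual data.

def percent_detected(expected, called):
    """Percentage of expected alleles that appear among the called alleles."""
    called_set = set(called)
    hits = sum(1 for allele in expected if allele in called_set)
    return 100.0 * hits / len(expected)

expected = ["D8-12", "D8-13", "TH01-6", "TH01-9.3"]
called = ["D8-12", "D8-13", "TH01-6"]
print(percent_detected(expected, called))  # 75.0
```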

47:42

For mixture detection, mixtures were run on the RapidHIT ID in the following

47:46

proportions: 1-to-1, 1-to-4, 1-to-8, and 1-to-16.

47:51

The RapidHIT ID consistently detected the presence of the minor at the 1-to-8

47:55

proportion.

47:56

On some samples, the minor was detected at the 1-to-16 proportion.

48:00

The ability to detect a minor at such low proportions provides utility for

48:05

crime scene type samples.

48:07

Perhaps indicating the presence of a low level male in a sample with mostly

48:10

female DNA.

48:11

And this information can be used by the laboratory to decide how to proceed

48:14

with further testing.
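For intuition, a two-person mixture proportion like 1-to-8 corresponds to the minor contributor's share of the total signal. Here is a minimal sketch of estimating that share from peak heights at a locus where the contributors share no alleles; all peak heights are hypothetical.

```python
# Sketch of estimating a minor contributor's proportion from peak heights
# at a locus with four distinct alleles (hypothetical peak heights in RFU).

def minor_fraction(major_peaks, minor_peaks):
    """Minor contributor's fraction of the total peak height."""
    total = sum(major_peaks) + sum(minor_peaks)
    return sum(minor_peaks) / total

# A roughly 1-to-8 mixture: minor peaks are about one eighth of the total.
frac = minor_fraction(major_peaks=[1600, 1500], minor_peaks=[210, 190])
print(round(frac, 2))  # 0.11
```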

48:16

So, concordance with the 3500 and the SeqStudio: all samples tested in the

48:24

following studies gave concordant results for the 3500 and the SeqStudio.

48:29

So, in all studies performed, sensitivity with total nanograms and volumes of

48:34

blood and saliva, the reference samples and the mixture studies, there was no

48:38

discordance observed

48:40

between the allele calls from the RapidHIT ID and those of the 3500 and the

48:45

SeqStudio.

48:46

Now, it's important to note the RapidHIT ID system does not consume the sample.

48:52

The ACE and Intel cartridges allow easy removal of the swab.

48:57

Once removed, the swabs can be rerun using traditional methods of extraction,

49:02

quantitation, amplification and CE.

49:06

The samples tested gave full DNA profiles after they were run on the RapidHIT ID.

49:11

This allows one swab to be used twice, since the RapidHIT ID does not consume the

49:16

sample.

49:17

This becomes very useful if there is a limited number of samples, or in cases where

49:21

two swabs are taken and one fails to give a result.

49:25

So, this chart shows samples that were run on the RapidHIT ID using the ACE cartridge.

49:34

The samples were removed from the ACE cartridge and placed in a hood to air dry.

49:38

The time between when they were run on the RapidHIT ID and then run again using

49:42

traditional DNA methods ranged from a week to a month.

49:47

The DNA quantities shown are from the swabs after they were run on the RapidHIT ID.

49:52

In all cases, the samples gave full profiles using traditional CE testing.

49:59

Even with very small amounts of blood or DNA on the swab.

50:05

These samples also gave a full profile on the RapidHIT ID, and the results were

50:10

recorded.

50:11

So, future testing is planned for lower level samples using the Intel cartridge

50:16

to see if we can also get DNA profiles after it's run on the Intel cartridge

50:21

with lower level samples.

50:26

So, can the systems be used together? Can they actually be integrated?

50:30

Well, both of them have a small footprint. Both require little maintenance and

50:35

both are very easy to run.

50:37

The RapidHIT ID is ideal for the booking station or crime scene unit, while the

50:42

SeqStudio is ideal for smaller full-service DNA laboratories. But can they exist

50:47

together?

50:48

One model would be to have, for example, the RapidHIT ID used for reference

50:53

casework samples.

50:54

The ease of use and minimal training required makes it ideal for technicians in

50:58

the laboratory to process these samples.

51:01

The high first pass success rate ensures good quality results.

51:06

In the same laboratory, using robotic extraction platforms like the AutoMate,

51:12

with the QuantStudio 5 for quantitation and the ProFlex for amplification and

51:15

the SeqStudio for CE, only crime scene type samples would need to be

51:20

processed using traditional DNA methods.

51:23

This would reduce the number of samples being run through the entire

51:26

conventional DNA process, so the four-capillary system of the SeqStudio

51:31

should be adequate to meet those needs.

51:34

By running the reference samples on the RapidHIT ID, there will also be a separation of

51:38

knowns and unknowns, preventing cross-contamination between the reference and

51:43

the questioned samples.

51:48

Let's do a little system comparison summary. The RapidHIT ID system has one

51:53

capillary, whereas on the SeqStudio you have four capillaries.

51:57

With the RapidHIT ID system, it's very simple; obviously there's no pipetting

52:01

required. The SeqStudio is easy, with a one-click universal cartridge.

52:06

The user level of experience for the RapidHIT ID system is a technician or

52:11

non-technical operator.

52:13

Forensic lab experience is not required for the RapidHIT ID system.

52:16

For the SeqStudio, trained forensic scientists or technicians can easily

52:19

operate that system.

52:21

The runtime is 90 minutes from sample to answer on the RapidHIT ID, whereas the

52:26

SeqStudio is 39 minutes, which is just the CE run.

52:30

But you also have your upstream processing time, that is, DNA extraction,

52:34

quantification, and STR amplification.

52:39

For sample type, for casework, you have the Rapid Intel cartridge,

52:44

whereas for the SeqStudio, for casework you have purified DNA.

52:48

For database or reference samples, you have the Rapid ACE cartridge. For database

52:53

reference samples on the SeqStudio, you can have a swab, or treated or untreated paper.

52:58

And for testing environments, for the RapidHIT ID system you can have the forensic

53:01

laboratory, satellite laboratory, a mobile CSI unit, or police booking station.

53:08

For the SeqStudio, you can obviously have a full-service forensic science

53:11

laboratory, or you can have a smaller satellite laboratory, since it does not

53:15

require much space.

53:17

So in conclusion, the question of which is the right tool for the job has a simple

53:23

answer.

53:24

Both the SeqStudio and RapidHIT ID as separate units can do the job they are intended

53:29

to perform.

53:30

The SeqStudio is a small alternative to the 3500, whereas the RapidHIT ID is for use at

53:34

crime scenes and booking stations.

53:37

However, together they do complement each other, increasing efficiency of the

53:40

current laboratory, allowing the setup of a smaller satellite laboratory.

53:45

They are both the right tool for the right job of processing DNA samples.

53:49

So I just wish to thank you and thank Thermo Fisher, and we'll turn this back

53:56

over to Michelle.

53:59

Thank you, Robert, for that great information and insight.

54:02

Audience, it is now time for the Q&A portion of our webinar.

54:06

If you have not already, please take just a moment here to ask Robert and Jamie

54:11

a question using the Q&A dialog box on your screen.

54:15

So Robert, let's start with you.

54:18

What demo opportunities are available to try out these instruments?

54:23

So basically, FIU has a relationship with Thermo Fisher

54:28

Scientific, where we are their center of excellence for rapid DNA testing.

54:33

So what we have in our facility is we actually have two RapidHIT ID units, where we

54:38

can have people come in and see demonstrations of the units.

54:42

They can actually run the units themselves.

54:45

We have reagents available for testing.

54:49

Especially in this time right now, where we are having some issues with COVID

54:53

and social distancing and so forth, we also do virtual demonstrations. We

54:58

have already done five of those internationally, for international

55:03

laboratories who are wanting to see the units.

55:06

And then we have done some for local agencies also.

55:12

If you want, we have samples here that you can actually prepare

55:16

yourself and run and that way you get to actually get some hands on time with

55:19

the instrument.

55:21

We aren't selling it, and we are not going to be able to discuss price, so for that

55:26

you're going to have to go to Thermo Fisher Scientific.

55:28

So really we're just a nice environment for people to come get some hands on

55:32

time with the instrument and get a feel for it and see how best they think they

55:36

can incorporate it into their laboratory.

55:40

That sounds awesome. How different is data analysis with the RapidHIT compared

55:46

to traditional CE?

55:48

Well, the data looks the same, especially since the RapidHIT ID is using

55:52

GlobalFiler Express, so you are going to see your GlobalFiler data.

55:56

I have used the 3500 and SeqStudio, and switching over to the RapidHIT ID, there's

56:03

really no learning curve involved in looking at the data.

56:06

You can see all your peak balances, your allele calls, your peak heights; basically

56:11

everything is the same.

56:14

So there really is no learning curve involved and the data is comparable.

56:18

Gotcha, that sounds easy. So Robert, tell us in your response to COVID-19, have

56:23

you had extreme difficulty making your demos virtual or what was the process

56:28

like to go ahead and get that back on for people that aren't able to get to the

56:33

lab and such?

56:35

No, the demo has actually been quite easy. We do them over the Zoom format and

56:38

we have cameras set up, so we are actually able to run a sample live and then

56:43

take a sample off so that everybody can actually see the beginning and end

56:47

process.

56:48

Then we move into the software and we basically go over the software with them,

56:52

show all the different features of the software.

56:55

Since it's all live, you can take questions live or we take questions after. We

56:59

can talk about our experiences with the software or any of the testing that we

57:03

have done here.

57:04

And show them the primary cartridge, the sample cartridge, basically anything

57:09

they need. So we haven't really had any issues with transitioning to the Zoom

57:13

and it's actually able to help us reach a broader audience.

57:17

Oh, that's great. Sometimes in the adaptation to the pandemic, we've definitely

57:21

found ways to get in touch with each other. So that's really great.

57:25

Jamie, let's turn our attention to you quickly. The SeqStudio had fewer

57:30

spectral artifacts.

57:33

Could I attribute that to the inclusion of marker-to-marker correction?

57:36

Yes, so that is part of the reason. The second level of pull-up reduction

57:42

offered with the SeqStudio for Applied Biosystems kits is that

57:47

marker-to-marker correction factor that you enable by selecting the AB kit that you're running

57:53

when you're setting up the SeqStudio.

57:55

But in addition, because the optics are different on the SeqStudio, and I

57:59

talked about instrument-specific pull-up on the 3500, which has to do with the

58:04

way the fluorescently labeled molecules move across the detection cell, you

58:08

don't see that due to the optical design on the SeqStudio.

58:12

So you don't see that type of instrument-specific pull-up, which also takes

58:17

away a portion of the pull-up on the SeqStudio that you see on the 3500, just

58:22

by the nature of the optical design.

58:24

Gotcha. Okay. So what is the benefit of using sample-specific spectral versus

58:30

using the generic spectral?

58:34

I'll take that one too. So, using the sample-specific spectral,

58:39

that is how the pull-up reduction algorithms are working.

58:44

The benefit is that you're doing more of an apples-to-apples comparison when

58:50

you're using a matrix to deconvolute the dye data from your sample.

58:57

So when you use a manual calibration or the traditional spectral calibration

59:02

that we set up on the 3500, remember those dyes are pure dyes.

59:07

They're not attached to any fragments. So it's more of an oranges-to-apples

59:12

correction.

59:14

And when you use sample-specific data, you're actually getting a matrix based

59:18

on dyes that are connected to fragments of DNA, just like your sample. So

59:23

that's where that more apples-to-apples deconvolution comes in.

59:27

Gotcha. Okay. Robert, let's get you back in the fold. Do you have any data

59:33

testing touch DNA samples on the RapidHIT?

59:38

So we haven't really done a full study on the RapidHIT with touch DNA. I mean, from

59:42

what I showed you, yes, we pipetted as low as one microliter of saliva onto a

59:47

swab and we were able to get a result.

59:50

We have done some testing with, basically, cell phones, but it's just been like

59:54

maybe just one cell phone, one firearm.

59:57

We've done water bottles, cigarette butts, and we've gotten good results.

01:00:02

Basically, we've gotten results that we're able to definitely eliminate someone

01:00:05

and even go so far to make the inclusion with it.

01:00:08

We do have plans to do larger studies to accomplish a lot more scenarios.

01:00:13

So that's also something that, you know, from the community, if you ever have

01:00:16

any ideas, if there's any need that you think that you would like to use the

01:00:21

RapidHIT ID for, you can send those directly to me.

01:00:24

When we design a study, we can try our best to include your suggestions so that

01:00:29

when we do produce something, it'll be something that actually the community

01:00:33

wants to see, and not just something that we sat down and thought would be

01:00:36

best.

01:00:37

We're always looking for suggestions on what we can do to make our studies

01:00:40

better.

01:00:41

That sounds great.

01:00:42

Alright, Jamie, over to you. How does the SeqStudio do with difficult

01:00:48

mixtures?

01:00:49

Did you get the same results compared to the 3500?

01:00:52

I will answer from the SeqStudio developmental validation that we did.

01:00:56

We did look at some mixtures.

01:00:58

We focused on two-person mixtures up to 1-to-7 and 7-to-1 mixture

01:01:03

ratios, and the performance was similar between the 3500 and the SeqStudio.

01:01:10

I'm not sure if Robert had any more experience in the work he did, but...

01:01:15

We only did, for right now, a two-person mixture down to the 1-to-16, and

01:01:20

we got comparable results from the 3500 and the SeqStudio.

01:01:24

They performed the same.

01:01:26

We could do more complicated mixtures later on, three- and four-

01:01:29

person mixtures, but we just had a two-person, and the lowest proportion we did

01:01:33

was a 1-to-16.

01:01:35

Okay, great.

01:01:37

Now you guys can both answer this question based on the instruments that you

01:01:40

talked about.

01:01:41

So Robert, we'll start with you.

01:01:44

What is the performance of the RapidHIT in analyzing samples exposed to extreme

01:01:49

environments?

01:01:51

So we've done limited testing on that.

01:01:54

I don't know if we've done anything with extreme environments to be honest with

01:01:57

you.

01:01:58

That is once again something that we are going to be doing more testing on,

01:02:02

because we do have those questions.

01:02:04

I know we've done some teeth, but they were from bodies that were left in the

01:02:08

ocean, and they didn't perform well, whereas teeth that were dried, even

01:02:12

for 30 years in a dried environment,

01:02:15

we actually got results on. But with respect to extreme heat or mold or

01:02:21

bacteria, we have not really done a lot of those studies yet.

01:02:25

Those are still studies in progress here.

01:02:27

So like I said, once again, if there's any environment or any

01:02:31

conditions that people specifically want to know about, please let us know so

01:02:35

that when we do plan these studies, we can incorporate them into our testing.

01:02:40

And Jamie, what about you?

01:02:41

What was the performance of the SeqStudio like in analyzing samples exposed to extreme

01:02:45

environments?

01:02:47

So we actually just released an application note as it relates to bone testing,

01:02:52

and in that application note, we talk about bones that have been exposed to

01:02:56

some harsh environments, some formic acid.

01:03:00

There are some DVI cases in that case study, and that's focused on the

01:03:06

RapidHIT ID with our Intel cartridges, and the performance for some of those lower

01:03:12

quality bone samples that were exposed to harsh conditions

01:03:16

wasn't as great as for the better quality samples, where we may only have seen a

01:03:21

partial profile, or just a few peaks to no peaks.

01:03:25

We also put out a poster where we ran some of the same bone samples on the

01:03:32

SeqStudio using a PrepFiler BTA extraction, and those low-quality

01:03:38

bone samples exposed to harsh conditions, for that subset of data

01:03:43

(I think there were 18 different bones), did fare better on the SeqStudio in the

01:03:48

traditional CE workflow than in the rapid workflow.

01:03:53

I'm not sure if we have the contact information of the person who asked that

01:03:57

question, but I'd be happy to send both the application note and the poster to

01:04:02

them so that they could have a closer look at the data.

01:04:06

Great. Thank you, Jamie.

01:04:07

Yes, audience, don't worry. We do have your info from when you registered,

01:04:10

so I think we could definitely get that info to you, which is very kind.

01:04:14

All right. Next one, Jamie.

01:04:15

You mentioned the J6-T dye set. Our audience member, David, saw that it has the

01:04:21

TED dye instead of the NED dye from the earlier J6 dye set.

01:04:25

Was this just an upgrade, or does each dye set apply to different

01:04:30

STR kits?

01:04:31

Yeah, so it is a dye change. The TED dye is a slightly quieter dye. It'll still

01:04:37

fluoresce in the yellow dye channel.

01:04:40

We have certain kits that are designed using the J6-T dyes in particular. That

01:04:48

is our VeriFiler Plus kit,

01:04:52

one of our kits that is used solely with our Chinese customers. GlobalFiler and

01:04:59

GlobalFiler IQC are both our original J6, and NGM Detect is a J6-T

01:05:08

dye set also.

01:05:10

That's used mostly by our European customers.

01:05:13

Robert, we have a few questions about the RapidHIT coming up for you. So tell us,

01:05:25

does the RapidHIT ID automatically check the hit in the DNA database? It depends on the application. With the RapidHIT ID and the RapidLink software, if

01:05:30

it's in the RapidLink software, it automatically imports the profile

01:05:34

into the software, and then you have the different applications.

01:05:37

With the matching software, you actually have to tell it to make a match.

01:05:43

For the familial, you can do a familial search where it will search everything

01:05:46

that's in RapidLink for a familial match.

01:05:50

For kinship, you have to enter the stated relationship that you are

01:05:54

checking, to make sure that relationship is what it's supposed to be.

01:05:58

The only thing it automatically searches is the staff elimination database. So

01:06:03

once you set up your staff elimination database and you have those people in

01:06:07

there, anytime you run a sample, it will automatically tell you if the

01:06:12

profile developed ends up matching someone in the staff

01:06:16

elimination database.
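The automatic check Robert describes, where every newly developed profile is compared against a pre-loaded staff elimination database without the analyst requesting a search, can be sketched in a few lines. This is a toy illustration only: the profile format, locus names, genotypes, and the `auto_eliminate` function are all hypothetical and do not reflect RapidLink internals.

```python
# Hypothetical sketch of an automatic staff-elimination check. A profile is
# flagged only if its genotype agrees with a staff reference at every locus
# the two profiles share (allele order within a locus doesn't matter).

STAFF_DB = {
    "analyst_01": {"D3S1358": (15, 16), "vWA": (17, 18), "FGA": (22, 24)},
    "analyst_02": {"D3S1358": (14, 15), "vWA": (16, 17), "FGA": (21, 23)},
}

def auto_eliminate(profile, db=STAFF_DB, min_shared_loci=3):
    """Return staff names whose genotypes match at every shared locus."""
    hits = []
    for name, ref in db.items():
        shared = [locus for locus in profile if locus in ref]
        if len(shared) >= min_shared_loci and all(
            set(profile[locus]) == set(ref[locus]) for locus in shared
        ):
            hits.append(name)
    return hits

# A run-of-the-mill sample that happens to match the first analyst:
sample = {"D3S1358": (16, 15), "vWA": (17, 18), "FGA": (22, 24)}
print(auto_eliminate(sample))  # -> ['analyst_01']
```

The point of the sketch is the workflow, not the matching rule: the check fires on every sample as a side effect of import, which is what distinguishes it from the familial and kinship searches that must be requested explicitly.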

01:06:18

Okay, we're getting close to time here, so Jamie, I'm going to ask you

01:06:23

the last question. Do you advise the SeqStudio for a lab with many samples?

01:06:31

So the SeqStudio is a four-capillary instrument, so a lab would definitely want

01:06:36

to consider their throughput when deciding which CE option would be best for them:

01:06:41

whether they're a high-throughput lab and maybe a 3500xL with 24 capillaries might

01:06:48

be the right answer, or whether the SeqStudio with four capillaries

01:06:56

can meet their throughput needs.

01:06:58

Of course, compared to the RapidHIT ID, you're talking about four capillaries

01:07:03

versus a single capillary, but then again, you may have some more upfront steps

01:07:10

to do before running those four samples on the CE.

01:07:13

So it all depends on the overall workflow, and then looking at the lab's

01:07:17

throughput and needs, I think, when you're making that decision.
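The trade-off Jamie outlines, capillary count versus run time versus upfront prep, can be put into rough numbers. The run times below are made up for illustration, not vendor specifications, and ignore prep time entirely:

```python
# Back-of-the-envelope throughput comparison for CE instruments with
# different capillary counts, assuming back-to-back injections over one
# 8-hour shift. Run times here are illustrative assumptions only.

def samples_per_shift(capillaries, minutes_per_run, shift_hours=8):
    """Samples processed per shift: whole runs that fit, times capillaries."""
    runs = (shift_hours * 60) // minutes_per_run
    return capillaries * runs

for name, caps, run_min in [
    ("single-capillary rapid instrument", 1, 90),  # assumed run time
    ("4-capillary CE", 4, 40),                     # assumed run time
    ("24-capillary CE", 24, 40),                   # assumed run time
]:
    print(f"{name}: {samples_per_shift(caps, run_min)} samples/shift")
```

Even with these toy numbers, the shape of the decision is visible: the multi-capillary instruments win on raw samples per shift, while the single-capillary rapid workflow trades throughput for eliminating the upfront extraction and amplification steps.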

01:07:21

All right, audience, that about wraps up all the time we have today. I'd like

01:07:24

to thank Thermo Fisher Scientific for sponsoring this webinar, our speakers,

01:07:28

Robert and Jamie, and of course you, the audience, for your attendance and

01:07:31

participation.

01:07:33

In 48 hours, this webinar will be available on demand if you would like to

01:07:36

watch it again, or share it with friends and colleagues.

01:07:40

Additionally, you will receive an email with information on how to obtain CE

01:07:43

credit documentation for your participation today.

01:07:47

The fourth webinar in this six part Future Trends in Forensic DNA Technology

01:07:51

series will be held on August 20th at 8 a.m. Pacific, 11 a.m. Eastern.

01:07:57

You can register for Tips, Tricks and Best Practices for Gaining Efficiency in

01:08:01

Your Forensic DNA Laboratory on the Forensic website, www.forensicmag.com,

01:08:08

where you can also view other webinars in the series on demand.

01:08:12

Thank you and have a wonderful day.