Radiotherapy Margins



Treatment planning is based on a static view of a living, and therefore decidedly non-static, patient. With the development of conformal radiotherapy and computational planning techniques, the need arose for uniform recording, reporting and prescribing to account for the uncertainties this problem creates.


ICRU Reports 50 and 62 filled this need by defining the target volumes used in radiotherapy: the Gross Tumour Volume (GTV), the visible tumour; the Clinical Target Volume (CTV), which includes the GTV plus microscopic spread; and the Planning Target Volume (PTV), which encompasses both the GTV and CTV while also accounting for any deviations from our static model. The ICRU defines the PTV as,


‘a geometric concept, to define appropriate beam sizes and beam arrangements, taking into consideration the net effect of all possible geometric variations and inaccuracies in order to ensure the prescribed dose is completely absorbed in the CTV’.


Analogous to the PTV, a planning organ at risk volume (PRV) is used for organs at risk. The geometric variations and inaccuracies quoted above refer to any variation between the planned and delivered dose distributions, however small, and are a measure of a plan's robustness: the smaller the variation between planned and delivered dose, the greater the plan's robustness. These residual errors are unavoidable despite variation-reduction efforts such as immobilisation and image-guided radiotherapy (IGRT).


ICRU 62 recommends using two types of margin, an internal margin and a setup margin. By separating these margins another volume can be created, the Internal Target Volume (ITV), to take into account internal errors from patient motion such as breathing, swallowing and bladder filling. An ITV is used in treatments such as lung planning as an extension of the CTV; the setup margin is then added to this.


However, the concepts of internal and setup margins are not widely used, and Stroom et al. (1999) and van Herk et al. (2000) instead suggested that errors fall into two categories: treatment and preparation uncertainties.


Treatment uncertainties are random and act to blur the ideal dose distribution with a Gaussian whose width depends on the photon penumbra and the standard deviation of the day-to-day variation in CTV location.


Preparation uncertainties are present at every fraction of the treatment and are therefore systematic in nature. Under the shift-invariance assumption, a systematic uncertainty results simply in a shift of the dose distribution with the same direction and magnitude. Systematic uncertainties include errors in target delineation, imaging uncertainties, and patient changes such as weight loss.


Marcel van Herk also addressed the problem of how to combine both these types of error in an analytical model to determine the CTV-to-PTV margin. He showed that it is incorrect to combine the standard deviations of the errors linearly, because systematic and random errors are uncorrelated, and acknowledged that the systematic error is stochastic across the patient population. Van Herk's PTV margin for systematic errors is defined so that the CTV has a 90% chance of lying within the PTV. This now well-known margin recipe allows 90% of treated patients to meet the ICRU recommendation that the CTV receives at least 95% of the prescribed dose.
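As a back-of-the-envelope illustration, the recipe (often quoted in its shorthand form M ≈ 2.5Σ + 0.7σ) can be sketched in a few lines. The 3.2 mm penumbra width is the typical photon value quoted by van Herk, and treating it as a fixed constant is a simplification:

```python
import math

def van_herk_margin(sigma_sys, sigma_rand, sigma_penumbra=3.2):
    """CTV-to-PTV margin (mm) from the van Herk et al. (2000) recipe.

    sigma_sys  : systematic (preparation) SDs in mm, combined in quadrature
    sigma_rand : random (execution) SDs in mm, excluding the penumbra
    The factor 2.5 gives 90% population confidence for the systematic term;
    1.64*(sigma' - sigma_p) sets the 95% minimum-dose level for the random
    blurring, where sigma_p models the photon penumbra. The shorthand
    2.5*Sigma + 0.7*sigma approximates the second term.
    """
    Sigma = math.sqrt(sum(s ** 2 for s in sigma_sys))
    sigma_prime = math.sqrt(sum(s ** 2 for s in sigma_rand) + sigma_penumbra ** 2)
    return 2.5 * Sigma + 1.64 * (sigma_prime - sigma_penumbra)

# example: 2 mm systematic and 3 mm random SD in one direction -> ~7 mm margin
margin = van_herk_margin([2.0], [3.0])
```

Note how the systematic term dominates: a millimetre of systematic error costs 2.5 mm of margin, but a millimetre of random error costs well under 1 mm, which is why reducing systematic errors pays off most.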


In this paper, van Herk suggests redefining the PTV as


‘The volume defined in treatment room coordinates to which the prescribed dose must be delivered in order to obtain a clinically acceptable and specified probability that the prescribed dose is actually received by the CTV, which has an uncertain location.’



In this way uncertainties are condensed into the CTV-to-PTV margin. The static PTV represents the moving CTV and is therefore a useful method of evaluating a plan, by using DVHs to report the minimum dose to the CTV with a pre-specified confidence level. The PTV then becomes a surrogate target for both treatment plan optimisation and evaluation.


‘All models are wrong, but some are useful.’

– George Box


Planning with a PTV has some limitations. It fails in cases where the CTV is close to the skin. We also need to be confident in identifying and quantifying the errors specific to each treatment site, modality and treatment regime to ensure optimal treatment. With IGRT we have greater information about the systematic error, allowing us to correct for it and reduce our margins. With online imaging we even gain information about the random component and can correct for it using rigid shifts of the treatment table, aligning either soft tissue or bony landmarks in the daily image to the planning image; this again allows the CTV-to-PTV margin to be reduced. However, getting this margin wrong is costly: reduce it too much and the tumour control probability drops, as seen by Engel et al.; make it too big and we see a greater incidence of normal tissue complications.


Furthermore, the PTV concept is still present in newer approaches where inhomogeneous doses are prescribed. Such cases conflict with several of the underlying assumptions of van Herk's original margin design, and as a consequence the indirect estimation of the confidence level becomes unreliable.


Intensity modulated particle therapy is one such example, where complex and inhomogeneous dose distributions are delivered, often with steep dose gradients. Along with the extra degree of freedom in the proton range, this renders the PTV an unsuitable planning tool.


An alternative to using margins is being developed: incorporating errors directly into the optimisation algorithm, as proposed by Unkelbach et al. and Pflugfelder et al. This is possible because in intensity-modulated treatments there exist many solutions that are all dosimetrically equivalent. This ‘degeneracy’ of solutions can be exploited to reduce the plan's sensitivity to uncertainties once they are incorporated into the optimisation process.
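As a rough illustration of the stochastic-programming flavour of this idea (a toy sketch only, not Unkelbach's or Pflugfelder's actual formulation), one can optimise a 1D fluence profile against the expected squared dose error over a set of setup-shift scenarios, instead of planning to a margin-expanded target:

```python
import numpy as np

def robust_fluence_1d(target, shifts, probs, n_iter=500, lr=0.1):
    """Toy scenario-based robust optimisation of a 1D fluence profile.

    The objective is the EXPECTED squared dose error over setup-shift
    scenarios. The dose model is a pure shift of the fluence (no scatter),
    which is a gross simplification kept only for illustration.
    """
    w = np.zeros(len(target))
    for _ in range(n_iter):
        grad = np.zeros(len(target))
        for s, p in zip(shifts, probs):
            dose = np.roll(w, s)                    # dose in this scenario
            grad += p * np.roll(dose - target, -s)  # chain rule through the shift
        w -= lr * grad
        w = np.clip(w, 0.0, None)                   # fluence must be non-negative
    return w

# toy demo: a 4-voxel 'tumour' with +/-1 voxel setup-error scenarios
target = np.zeros(20)
target[8:12] = 1.0
w = robust_fluence_1d(target, shifts=[-1, 0, 1], probs=[1/3, 1/3, 1/3])
```

In this toy model the optimum tends toward a probability-weighted blur of the target, i.e. the scenario weighting rather than a geometric margin decides how far the dose extends beyond the CTV.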


Ultimately, whether we use a margin or have our treatment planning system calculate a probabilistic plan, it is all about balance: how we balance conformity with robustness, NTCP with TCP, and patient throughput with advanced yet time-consuming treatment techniques.


In a health service such as ours, we are balancing individualised treatment against treating the population.





ICRU Reports 50 & 62

Nguyen Thai Binh’s PhD Thesis 2008


Estro 33 Monday and Tuesday


Teaching lecture: treatment planning techniques to handle geometrical and anatomical uncertainties: state of the art. M. Sohn, Germany

This talk, on probabilistic planning, was really interesting and really complicated (I was really tired). The main thing I took from it was that probabilistic planning is a great solution for when margins fail, as in liver and lung, but despite the concept being around for years it has never been developed. This was discussed, and it comes down to the vendors not supplying it in their treatment planning systems.

Round table discussion: The future of medical physics in radiotherapy: academics vs. professional role.

I have to admit I missed the three talks prior to the discussion, but the discussion itself was really interesting and pretty heated.

The main theme was the argument about including research in the week-by-week work of the medical physicist and how this should be achieved. There was a large concern that industry is now leading research and making the decisions when it comes to progress. One member of the audience claimed that medical physicists ‘have lost the field’; several people agreed, but many felt we are only on the verge of losing it.

The idea of medical physicists engaging in more research activities was supported by all, but how to implement it under such tough time constraints and heavy clinical workloads was left inconclusive. The only suggestion was that research would have to be carried out in our own time, and that we must make a ‘personal sacrifice’ to ensure research is done. Some people argued that only physicists with a PhD should be doing research, and others said we should not be doing it at all if it is not in our job description.

This then led to the question of whether research activities should be in the medical physicist's job description. Thwaites argued that we are hired to do a job: our role is to fulfil our job descriptions, and if we believe that we are fundamental to the clinical delivery of radiotherapy and that research needs to be part of that service, then we need to work to make the change at a higher level.

Research at a service level, for example getting the best out of our equipment, is fundamental to the service and is in itself research. It is this type of ‘small r’ research that we should strive for in our jobs and aim to publish, as well as the capital-letter ‘R’ research.

For me, what I took from this discussion was that, worryingly, as medical physicists we do not seem to fully understand what our role should be. In a multidisciplinary environment such as a hospital, this does not bode well for effective teamwork. It appears we feel we should be carrying out research, but that is not necessarily what we are employed to do, nor do we have the time or resources to do it. No one really discussed how essential the medical physicist's role, even without research, is to the radiotherapy service, or whether there should be greater support for those coming from academia into the clinical service. In my case I feel I am in a very lucky situation, with a PhD based in a clinical department giving me a solid grounding in both areas, and feel my role is to deliver the best service I can.

Symposium: Review of dose comparison metrics

Current status of gamma index, G. Budgell, UK

The gamma index has become an almost ubiquitous method for comparing dose distributions, especially for the routine verification of complex radiotherapy techniques. There are though a large number of variables which can affect the quoted result. These include normalisation point and method, region of interest selected, alignment, measurement device and phantom, the measurement plane or volume, software implementation and 2D versus 3D gamma.

* Some software features, such as alignment tools, can hide a real error such as MLC mis-alignment.

* The ROI chosen can also affect the percentage failure.

* It was also found that software upgrades can affect the gamma result: same parameters, same plan, same equipment, new software = different gamma.

* The gamma differs from manufacturer to manufacturer.

* When comparing 3D to 2D gamma, 3D gamma gives lower failure rates because it accounts for out-of-plane dose gradients.

The advice is always to use absolute dose, as normalised dose can hide errors. Phantom depth and the lack of heterogeneities also need to be considered.

The bottom line was that quoting ‘3%/3mm’ on its own is meaningless. We must think carefully about what we are doing.

Despite its limitations the gamma index is still a useful tool when used with care and understanding.
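To make those variables concrete, here is a minimal 1D gamma sketch with global normalisation (the Low et al. 1998 formulation; a real tool would interpolate the evaluated dose and work in 2D or 3D, which is exactly where the implementation choices above creep in):

```python
import numpy as np

def gamma_1d(ref, evl, dx, dose_tol=0.03, dist_tol=3.0):
    """Minimal 1D gamma index with GLOBAL dose normalisation.

    ref, evl : reference and evaluated dose arrays on the same grid (Gy)
    dx       : grid spacing (mm)
    dose_tol : dose criterion as a fraction of max(ref), e.g. 0.03 for 3%
    dist_tol : distance-to-agreement criterion (mm), e.g. 3.0 for 3 mm
    A point passes when its gamma value is <= 1.
    """
    dD = dose_tol * ref.max()        # global normalisation (one choice of many)
    x = np.arange(len(ref)) * dx
    gamma = np.empty(len(ref))
    for i in range(len(ref)):
        # generalised distance from reference point i to every evaluated point
        g = np.sqrt(((x - x[i]) / dist_tol) ** 2 + ((evl - ref[i]) / dD) ** 2)
        gamma[i] = g.min()
    return gamma

# a 1 mm shift of an otherwise identical profile passes 3%/3 mm everywhere
d = np.exp(-((np.arange(100) - 50.0) ** 2) / 200.0)
pass_rate = (gamma_1d(d, np.roll(d, 1), dx=1.0) <= 1.0).mean()
```

Even in this toy, swapping `dD` from global to local normalisation, or changing the region over which `pass_rate` is averaged, changes the quoted number: precisely the speaker's point.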


Debate and closing remarks: The house believes that large phase III trials remain the gold standard in radiation oncology.

For the motion: S. Bentzen, USA

Against the motion: A Dekker, Netherlands

This debate was brilliant fun and a great ending to Estro 33. Both speakers had fun and kept the audience engaged and in places laughing.

The debate started with a lot of examples of how non-phase-III trials had yielded poor data and how trust can be lost if a trial is inadequate; the UK RAPPER study was one example. It ended with an extremely emotive example of how a poorly understood trial led to years of incorrectly treating premature infants, leaving them blind.

The next speaker began by arguing that all the examples in the previous talk were drug trials, and that this is not the case in radiotherapy, where we need results for equipment. He discussed the impossibility of looking at long-term survival statistics when we develop better technology and equipment within that very time frame, and the small patient samples available. The audience was asked to stand up if they knew that at least 8% of their patients were in a trial; about 5 people stood up.

The motion was then discussed on the floor, with everyone joining in. In the end, though, the majority voted against the motion that large phase III trials remain the gold standard in radiation oncology.

Sunday at Estro 33 – very particle orientated (plus my favorite topic of planning methods for determining optimal plans!)

I managed to get out of bed in a much more together and timely way this morning (as I had forgotten to order my Elekta dinner tickets for last night) for the 8am teaching lecture on automated multicriteria treatment planning and Pareto navigation by Seb Breedveld. This had been a tough decision, as in the next room Marco Schwarz was discussing my PhD topic of range and geometric uncertainties in proton therapy, but I felt I knew a lot about that topic already!

8am teaching lecture: Automatic Multicriterial treatment planning and Pareto navigation. Seb Breedveld, Netherlands

Breedveld explained that he was asked to not make the talk too complicated, as everyone would be sleepy still at 8am. So he began with a shoe analogy.

Waving his shoes in the air he claimed, he hated shoe shopping and asked the audience for objectives for buying shoes. People called out attributes like comfort, colour, price and size. Breedveld exclaimed that size was a constraint not an objective (we all know we cannot get away with the wrong size) and got everyone laughing.

He explained that to compare the analogy fully to treatment planning, we must also be aware that we must buy the shoes today. So how do we buy the optimum shoes for today? This is the biggest challenge as we do not know what we want until we know what is available.

In Radiotherapy planning this reflects that we cannot choose the best plan for a patient until we know what is achievable. This requires that we have our aims and the possibilities up front.

I first encountered the concept of using a Pareto surface in radiotherapy planning at ESTRO 30 in London, four months into my PhD, and loved its simplicity. The concept, introduced to help us solve this problem, was paved by Thieke et al. in 2003, so it is relatively new.

Pareto navigation.

The Pareto surface holds an infinite number of plans; however, this infinite number of optimal plans is still vastly smaller than the number of non-optimal plans available!

We cannot calculate all options on the Pareto surface and so have to use methods such as approximation to create the Pareto surface.

For example, to calculate optimal plans with two parameters to optimise (2 dimensions), we calculate the plans at the two extremes, then calculate plans for several points in between and approximate the rest to obtain a Pareto front of optimal plans. To give an idea, imagine that 5 plans per dimension were required.

In that case, with 10 objectives around 2002 plans must be calculated, and with 15 objectives roughly 2 million plans.

Imagine how many structures you use, and each constraint on each structure… Though the Pareto method is intuitive and great for research, it is time- and computationally intensive and difficult to implement. And with so many optimal choices, how do we choose the optimal plan in the end?
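For what it is worth, the 2002 figure matches the number of ways of distributing 5 sample points across the trade-off directions, i.e. combinations with repetition. This is my reading of the scaling, not something stated explicitly in the talk, and the ~2 million figure for 15 objectives presumably assumed denser sampling:

```python
from math import comb

def pareto_anchor_plans(samples_per_dim, n_objectives):
    """Plans needed if sample points are distributed across objectives as
    combinations with repetition: C(n + k - 1, k - 1).
    This interpretation of the quoted numbers is an assumption."""
    return comb(samples_per_dim + n_objectives - 1, n_objectives - 1)

print(pareto_anchor_plans(5, 10))  # 2002, the figure quoted for 10 objectives
```

Either way, the count explodes combinatorially with the number of objectives, which is the practical obstacle being described.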

‘Wish list’ approach

A novel method to answer the question ‘what is the optimal plan I can create today?’ is the ‘wish list’: a site-specific list of required hard constraints (e.g. maximum and minimum doses) and objectives (e.g. maximise or minimise dose), each assigned a priority.

The hard constraints are met first, and the system then works down through the wish list in priority order as each criterion is met. It should be noted that not all wishes can be granted! This process is iterative, and the whole process is automated.

The pros of this method are that it is automated, including beam angles, that it can be used in planning studies and for re-planning, and that all plans are directly deliverable.

The cons are that there is no feedback or sensitivity analysis, and that creating the site-specific wish list in the first instance is time-consuming and difficult.
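A toy sketch of the prioritised idea, selecting from a discrete set of candidate plans rather than re-optimising fluences as the real system does (the plan metrics and names here are invented purely for illustration):

```python
def wishlist_select(plans, hard_constraints, objectives, slack=1.05):
    """Toy 'wish list' selection over a discrete set of candidate plans.

    plans            : list of dicts mapping metric name -> value
    hard_constraints : list of (metric, max_allowed) that must always hold
    objectives       : metric names to minimise, in priority order
    After each objective is minimised, the best achievable value (plus a
    small slack) is frozen as a constraint before moving down the list.
    """
    feasible = [p for p in plans
                if all(p[m] <= lim for m, lim in hard_constraints)]
    for metric in objectives:                  # work down the wish list
        if not feasible:
            break
        best = min(p[metric] for p in feasible)
        # freeze this wish; the slack leaves room for lower priorities,
        # which is why not every wish is fully granted
        feasible = [p for p in feasible if p[metric] <= best * slack]
    return feasible[0] if feasible else None

# hypothetical candidate plans (metric names invented for the example)
plans = [
    {"ptv_d95": 57, "rectum_mean": 30, "bladder_mean": 25},
    {"ptv_d95": 57, "rectum_mean": 22, "bladder_mean": 40},
    {"ptv_d95": 57, "rectum_mean": 23, "bladder_mean": 28},
]
chosen = wishlist_select(plans, hard_constraints=[("rectum_mean", 35)],
                         objectives=["rectum_mean", "bladder_mean"])
```

The slack is what makes the trade-off work: a slightly worse rectum dose is accepted here because it buys a much better bladder dose at the next priority level.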

Joint Symposium: Image guidance (or the lack of) in proton therapy

The symposium today was ‘Image guidance (or the lack of) in proton therapy.’ I was really excited about this symposium because of the speakers (Bolsi, Bortfeld & Kooy). I got the feeling that none of the speakers felt the side comment ‘or the lack of’ in the title was necessary, and none addressed that part of the question.

Alessandra Bolsi (PSI) started the symposium off by outlining the clinical practice for image guidance carried out at PSI and offered us the new acronym ‘IGPT’ instead of IGRT to differentiate between proton and conventional radiotherapy image guidance.

Anyway… daily IGPT at PSI uses a planar CT: both horizontal and vertical planar scout views are compared to a reference scan and aligned using bony landmarks. The process is semi-automated, and shifts are applied as a table offset.

On their new Gantry 2 an in-room CT on rails exists, and they also have beam's-eye-view X-ray imaging, which can be acquired and compared to the DRR from planning. This imaging technique can also be used in fluoroscopy mode.

The importance of repeat imaging was highlighted, as it allows intervention to be made. Again PSI showed the same patient examples of a nasal cavity filling and a child gaining weight, and surface imaging and proton imaging got a quick mention.

Hanne Kooy (MGH) began with a couple of bold claims that I believe are worth repeating:

  1. Protons are, in fact, superior, and
  2. it is a mistake if you copy what has been done for X-rays.

This is because protons can give direct feedback: we have knowledge of what, where and when each proton spot was delivered. We need to ensure that pre- and post-treatment range maps are used in proton therapy, rather than relying on the image guidance used in conventional radiotherapy.

Thomas Bortfeld (MGH) discussed the many novel-imaging techniques that have been experimented with at MGH including looking at both the biology, such as measuring functional uptake, and the physics, for example prompt gamma and PET.

Bortfeld posed the question: can we know the range to within 1 mm?

His answer is yes, but only sometimes.

- MRI can, but only after treatment, so it is too late.

- PET/CT can, but only if the distal edge is in the bone.

- Prompt gamma can, but has so far only been done in phantoms.

A big concern for other institutes, including the NHS, will be cost: to do this adequately would require an in-room mobile PET/CT scanner.

In conclusion, IGRT (aka IGPT) is being carried out in proton therapy centres. These experts argued that the image guidance equivalent to CBCT in X-ray therapy is not the optimal method for protons. Current solutions are being developed, but they are not ready and may carry high financial implications.

My conclusion is it is probably best to do CBCT until then.


Toward accurate tissue characterization using dual energy CT for particle therapy beam dose calculation, Hugo Bouchard (USA).

I had already seen this talk, but there was so much interest in their claim that they could reduce the range uncertainty from HU, for all tissue, to less than 1 mm. I pointed the talk out to Tony and suggested he attend it, as he had missed it at the NPL PPRIG meeting last month.

The Maastro group used a stoichiometric calibration of a dual energy CT scanner to determine the range. Their calibration was achieved using the (I think the new HE) Gammex 467 inserts. Quoting the abstract, the goal of this work is ‘to demonstrate that once improved, the stoichiometric method of Schneider et al. (1996) for conventional CT scanners can be extended to dual energy CT (DECT) and yield higher accuracy in determining tissue parameters and proton and light-ion beam ranges as compared to current methods found in literature.’

Currently, the error used by most centres sits within 3% over all tissues, as measured by Schneider et al. (1996). Simply summarised: in soft tissues they achieved errors within 1%, and in bone within 2%; the ‘extra’ percent is to incorporate any artefacts in the imaging system itself.

Dual energy CT can use either one source that flicks between two energies, or two sources of different energies perpendicular to one another.

The results are summarised in the abstract: ‘Using the DECT stoichiometric calibration method, maps of electron density and equivalent electron number of the Gammex phantom are determined with an average accuracy of (0.3 +/- 0.4)% and (1.6 +/- 2.0)%, respectively. I-values of the phantom are determined with an uncertainty of (4.1 +/- 2.7)% using the established mathematical relation, leading to expected clinical uncertainties in proton stopping power (216 MeV) of (0.5 +/- 0.4)% over all human tissues. At therapeutic energies, the uncertainty in the range of protons and carbon ions, determined from the Bethe formula, is found below 1.3 mm and 0.6 mm respectively.’

These findings show an improvement on the 1-3 mm range uncertainty reported by Paganetti (2012). However, as Tony pointed out in the Q&A, this was all modelled data using tissue-equivalent materials, not real tissue; he noted that in the Schneider stoichiometric paper the large errors come from real bone.
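To see why a ~4% uncertainty in I-value maps to only ~0.5% in stopping power, here is a sketch of the stopping-power ratio to water from the Bethe formula. It omits shell and density corrections and uses textbook constants, so it is illustrative only, not the authors' implementation:

```python
import math

M_E_C2 = 0.511e6   # electron rest energy, eV
I_WATER = 75.0     # mean excitation energy of water, eV (commonly used value)

def beta2(kinetic_mev, rest_mev=938.272):
    """(v/c)^2 for a proton of the given kinetic energy."""
    gamma = 1.0 + kinetic_mev / rest_mev
    return 1.0 - 1.0 / gamma ** 2

def spr_to_water(rho_e_rel, i_tissue_ev, kinetic_mev=216.0):
    """Proton stopping-power ratio to water via the Bethe formula
    (no shell/density corrections -- a simplification).

    rho_e_rel   : electron density relative to water (what DECT measures)
    i_tissue_ev : tissue mean excitation energy, eV
    """
    b2 = beta2(kinetic_mev)
    def bethe_bracket(i_ev):
        return math.log(2.0 * M_E_C2 * b2 / (i_ev * (1.0 - b2))) - b2
    return rho_e_rel * bethe_bracket(i_tissue_ev) / bethe_bracket(I_WATER)
```

Because I enters only through a logarithm, the stopping-power ratio is far more sensitive to the relative electron density, which DECT measures directly, than to the I-value, which is consistent with the quoted 4.1% I-value uncertainty shrinking to ~0.5% in stopping power.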

Iba Lunchtime seminar

The last talk I want to mention was in the IBA lunch seminar, given by Marco Schwarz, who has an IBA machine in Italy. He is an excellent and clear speaker, which made for an enjoyable and interesting talk justifying why his centre only uses spot scanning.

He highlighted the importance of robust evaluation and IGRT and claimed ‘the conformity race is over.’

One thing he mentioned that I found really useful was that they plan for 20% of treatments to be delivered with photons in case of machine breakdown. Each patient therefore requires both proton and photon treatment plans.

In this talk we were shown a lot of very convincing treatment plans advocating the use of proton therapy.

Overall, a really great and informative day. I also learnt that you can book sessions with the vendors for demonstrations of their software. To come: I will summarise some of the things I learnt about the Varian and RaySearch proton TPS demos (these were brilliant, and I was just sad I ran out of time to visit Philips), and I will finish writing up my notes on the last 2 days of ESTRO 33.


Estro 33 notes from Saturday 5th 2014

ESTRO 33 – 4th-8th 2014 @Vienna

– a summary so far


Arriving in Vienna via Bratislava (thanks to the cheap Ryanair option), I realised I had arrived a day early for the conference. The conference got fully underway on Saturday, with pre-conference workshops on Friday and the opening ceremony that evening. So my options were to sneak into a pre-conference workshop, study, or go and explore the city. I chose the latter.

That evening I wandered down to the Vienna Messe conference centre, 15 minutes from my hotel, to attend the opening ceremony. The Italian ESTRO president, Valentini, opened the ceremony with a brief introduction. The following speakers (namely chairs at ESTRO) spoke graciously about the extreme efforts, the immense number of people and the organisation required to coordinate an event such as ESTRO 33, and went on to thank many people, especially the abstract reviewers. I should note that this year's ESTRO saw the highest number of submitted abstracts yet (1843!). It was the clinicians who submitted the most abstracts, closely followed by us physicists, but it was the physicists who had the most abstracts accepted. The opening ceremony took the opportunity to remind the audience of the culture of our host European city in the form of a performance from the Vienna Ballet, who danced to three pieces (one of which was a waltz). Following this we were served wine and snacks in the exhibition centre, where I found myself feeling slightly bemused watching a CyberKnife dance to a rap song (I will put the vid on Twitter).


Saturday began with all the good intentions of arriving early and prepared. In reality I was late and lost, and resorted to following a man with an ESTRO lanyard, four steps behind (I tried, but couldn't pluck up the courage to speak to him). Finally at the Messe centre, I rushed past rooms named after Viennese musicians until I found ‘STRAUSS 1’.

Teaching lecture

How to translate new biological concepts into radiation oncology –M. Baumann

The very rapid progress in high-precision radiation delivery and planning technology during the past decades has been translated into clinical practice and has allowed the development of novel, more efficient clinical strategies that will be further refined in the coming years. Further progress in personalised radiation oncology, however, needs the integration of biological information on the specific tumour and the surrounding normal tissues into the treatment strategy of patients.

The concept of using cancer stem cell volume as a surrogate GTV for dose prescription was described. In a 60 Gy GBM trial it was seen that using PETra II allowed post-surgery uptake analysis to determine whether recurrence is likely to occur, and therefore allowed dose escalation in that area.

What was found in this work of using cancer stem cells to ‘dose escalate’ (NB: I am not sure whether this is called dose painting, or whether dose painting is different, as there the mean dose to the target remains the same, whereas here the overall mean dose is increased?) was that you could get different results even when treating tumours of the same histology and the same volume, and this came down to the cancer stem cell density. The stem cell density is directly related to the dose escalation required to achieve a 50% tumour control probability (TCP). This means that different tumours (in volume and histology) may require the same dose regime… so how do we determine CSC density?

…. I am not so sure, but think it involved dying cancer cells and counting…

Another option in designing individualised, biologically inspired radiotherapy regimes is to find markers that can determine whether the cancer is radio-resistant. It was found that HPV+ H&N cancers are more radiosensitive than HPV- cancers; therefore, determine HPV status, and if HPV-, apply dose escalation.

Or biopsy, irradiate the biopsy, do some counting and then create the radiotherapy regime for that patient based on the irradiated biopsy outcome.

Baumann also talked about hypoxia decreasing radio-sensitivity in cancer cells, how PET and CT can be used to determine hypoxia, and using F-MISO PET to image daily during treatment, as the locations of hypoxia can change throughout treatment. However, the location of the hypoxic region, and whether recurrence will occur in it, are patient-specific variables.

Finally, the topic of molecular targeting was discussed, mainly the use of Cetiximab which led to the question, why do some patients respond and some don’t?

Overall, a really interesting lecture just outside my comfort zone, but so well structured that I remained engaged and understood the main points!

After this lecture I then attended the RTT track (where I learnt that RTT means radiographer). The first of these talks I found really interesting and was given by a radiographer in Denmark where they successfully implemented online soft tissue registration for lung cancer. I am going to summarise the main points of this nice talk.

Adaptive Treatment Planning

RTT implementation of daily online soft tissue match for lung tumours – MH. Anderson, Denmark

  1. CBCT was used to carry out daily online IGRT of lung treatments.
  2. However, matching to bony anatomy can lead to tumour misses and heart wall over-dose. Something horrible-sounding can cause large soft tissue changes: atelectasis.
  3. So instead they match to soft tissue, using Varian software that takes a two-colour approach to visually match the daily CBCT to the planning CT.
  4. Training of RTTs included a lecture, e-learning, hours of seminars, 3D visualisation, hands-on work, an exam, supervision and follow-up. It was shown to be successful!
  5. The first 8 weeks of clinical treatment were overseen by superusers, and physicists carried out patient QA via weekly offline review of all patients and all fractions treated in this time. This is now down to 1 CBCT a week.
  6. It must be noted that geometric couch shifts of up to 1 cm may be required. In this case the dose to the spinal cord must be evaluated, as the spinal cord PRV is only 5 mm.
  7. Replanning is required in the cases where:

-       the target requires a shift >1 cm

-       the target shift is =1 cm on 3 consecutive days, as it can then be assumed systematic

-       the lymph nodes move >0.5 cm

-       the lymph nodes move =0.5 cm on 3 consecutive days

  8. So it was found that RTTs are capable.
  9. No doctor input was required, as the physicists did the replanning.
  10. Multidisciplinary collaboration is essential.
  11. The 12 weeks of training was substantial.
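The replanning triggers above can be sketched as a simple decision function. The thresholds and all names here are taken from my notes of the talk, so treat them as indicative rather than the actual Danish protocol:

```python
def needs_replan(target_shifts_cm, node_shifts_cm):
    """Replanning decision from daily couch-shift magnitudes (most recent
    last). Thresholds are as I noted them, not a verified protocol."""
    t, n = target_shifts_cm, node_shifts_cm
    if t and t[-1] > 1.0:                    # single large target shift
        return True
    if len(t) >= 3 and all(s >= 1.0 for s in t[-3:]):
        return True                          # persistent -> assume systematic
    if n and n[-1] > 0.5:                    # single large lymph-node shift
        return True
    if len(n) >= 3 and all(s >= 0.5 for s in n[-3:]):
        return True
    return False
```

The 3-consecutive-day rule is what distinguishes a random daily excursion from a systematic change worth the cost of a replan.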

Following this I went to a physics track, drawn by the fact that I have recently started to understand the capabilities of the CBCT on our Elekta machines back in Cambridge and thought I could learn something useful for the department. There was a lot of image processing involved that I found hard to follow, simply because I have not studied this sort of computing since my MSc. Really interesting and useful work, though, if you want to recalculate dose on your CBCT… I think what we want to do is EpiGray, which would require this.

Technical Aspects of Imaging

Motion and metal artefact reduction in CBCT with implanted cylindrical gold markers. – J Toftegaard, Denmark

The gold fiducial markers cause streaking artifacts in CBCT, and metal artifact reduction (MAR) techniques fail when there is motion present. A complex process of motion and metal artifact reduction (MMAR), involving back projection, masking, reconstruction, forward projection and some modeling, was used by this group with good results at removing artifacts… to be precise:

‘(1) Automatic segmentation of the cylindrical markers in the CBCT projections. A 3D model of the marker size, 3D orientation, 3D mean position and 3D trajectory is generated during segmentation by a probability-based 3D motion estimation method.

(2) Removal of each marker in the projections by replacing the pixels within a mask centered at the segmented position with interpolated values. The mask is generated from the 3D model by projecting the marker shape.

(3) Reconstruction of a marker-free CBCT volume from the manipulated CBCT projections.

(4) Reconstruction of a standard CBCT volume with metal and motion artifacts from the original CBCT projections.

(5) Identification of the smallest 95% confidence volume for each marker in the CBCT volumes, i.e. the smallest volume that encompasses 95% of the 3D trajectory of Step 1.

(6) Generation of the final MMAR CBCT reconstruction from the marker-free CBCT volume by replacing the voxels in the 95% confidence volume with the corresponding voxels of the standard CBCT volume.’


Half-fan CBCT scans were used for 28 patients (15 liver), and reconstruction was done with Varian's iTools and their in-house MATLAB program.


- The limitation is that the method must find the markers (MMAR > MAR).

- Scoring is based on the number of streaks counted and the HU deviation (MMAR > MAR).

- MAR masks 4x more pixels than MMAR, which leads to artificial homogeneities in HU.

- MMAR also works for 4D.

There were then A LOT of inconclusive talks about dose painting. I think I even saw two on dose painting by numbers (DPBN) from FDG PET imaging in lung cancer; both inconclusive. But I did learn the difference between dose painting by contours and dose painting by numbers: by numbers means the dose scaling is literally proportional to the uptake in the functional image of whatever label you used… seems a little un-robust?
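For reference, the voxel-wise mapping usually quoted for DPBN is a linear interpolation between a minimum and maximum prescription. A sketch of that commonly published form (not necessarily what either talk used):

```python
def dpbn_dose(uptake, u_min, u_max, d_min, d_max):
    """Dose painting by numbers: linear voxel-wise mapping from
    functional-image uptake to prescribed dose (one standard choice)."""
    u = min(max(uptake, u_min), u_max)      # clamp uptake to the window
    frac = (u - u_min) / (u_max - u_min)    # 0 at u_min, 1 at u_max
    return d_min + frac * (d_max - d_min)
```

The un-robustness worry follows directly: any noise or day-to-day variation in the uptake image propagates linearly into the prescription.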

David Thwaites then spoke on the past, present and future of medical physics in his talk ‘Back to the future’, the Emmanuel van der Schueren award lecture. I wish I could remember some of the brilliant facts, but my memory is not great with facts; overall, though, medical physics has been around a long time and the first linacs were built and used for treatment in the UK!


After catching up with some friends over lunch, for which we sneaked out of the RaySearch lunchtime talk (I really should have stayed in this talk, but I had not seen my friends since a PARTNER workshop back in 2011, so please forgive me!), I caught the results from the CHHiP trial. I was a bit disappointed as the speaker rushed through this talk so fast that I could not both make notes for everyone back home and absorb the content. It turned out the speaker was speaking twice in a row and apparently wanted more time, even skipping questions, to talk about the results from the PIVOTAL trial instead.


Prostate Cancer

IGRT for prostate cancer – results from the CHHiP IGRT phase II sub-study
D. Dearnaley, UK

The CHHiP trial (as explained to me in 5 minutes by Simon Thomas) is a randomised clinical trial at 16 UK centres, including about 3000 prostate cancer patients treated with radiotherapy, and consists of three dose regime arms:

37# of 2 Gy, 19# of 3 Gy & 20# of 3 Gy

These arms are then further randomised into no IGRT, IGRT, and IGRT with tighter margins. However, some centres already routinely treat with IGRT and it was deemed unethical to stop using it, so the no-IGRT option was not an arm at those centres.

The main endpoint was PSA as a survival surrogate; the trial also looked at quality of life and toxicity.

The main treatment planning consisted of field-in-field-type forward IMRT and inverse IMRT.

The results presented were from only 293 patients at 2 years (hope we didn’t kill the rest!), and only the results from the IGRT arms were presented.

All I got from the talk, really, was that bladder and rectum toxicity was worse without IGRT than with it, and that using smaller margins with IGRT gave the least toxicity. I found this kind of obvious, though, and none of the nuances of this trial were discussed.


‘At 2 years, grade ≥2 bladder/bowel toxicity was 4.8% (90% CI: 0.9–14.2), 5.8% (2.7–10.6) and 3.9% (1.4–8.7) in groups A, B & C respectively. No grade 4 RTOG bladder/bowel late events were reported.

Conclusions: All treatment groups, including no IGRT & IGRT with reduced margins, experienced 2-year grade ≥2 RTOG bladder/bowel toxicity of less than 10%.’


After this I could not focus much longer, so I wandered into a physics track and had my attention captured by a talk on the robustness of TCP in proton therapy for prostate using daily IGRT shift data (sounds like my work, right!!!).

And that about concludes Day 1 of ESTRO 33!

Don’t expect such a write-up tomorrow as the boat race is on and I will watch it at the Lia rowing club on the Alt Donau. :D Night!


Presenting at NPL

My slides from the NPL course this week. I will try to write up feedback soon!

Incorporating and Managing Uncertainties in PBT

ICMP 2013, Brighton

Addenbrooke’s Medical Physics department had several abstracts accepted for both poster and oral presentations at the International Conference on Medical Physics 2013 (ICMP 2013), held in Brighton.

There was a particle therapy session with speakers Ranald Mackay from the Christie discussing the process of bringing protons to Manchester, Tony Lomax from PSI speaking on 4D proton treatment, Hanne Kooy from MGH speaking about their experiences and the current challenges that still exist, and David Thwaites speaking about his work in the field.

I presented a talk in this session titled ‘A retrospective study of proton treatment plan robustness’.

I felt the conference was not especially well attended (or the rooms were just too big), but the talks were well presented, relevant and interesting, with several speakers from centres abroad. The most disappointing aspect was the lack of discussion and questions in every session; people seemed reluctant to start a discussion. I am not sure if this was due to the rooms being so large or the desire to stick to the schedule.

As a department we were very well represented and showed how strong research at Addenbrooke’s is within medical physics.

We also had quite a lot of fun playing in the arcade on the Pier and chilling on the beach at lunch. Somewhere there is a video of one of my supervisors on the dodgems; I will try to find it!

Med Phys visit Brighton!

Work in Progress Talk 2011
Selwyn College, Cambridge



Click here for my WIP talk I presented at Selwyn College, University of Cambridge in 2011 on Radiotherapy and my PhD topic.

Selwyn WIP talk 2011

(not all images are my own)

Lessons learned – Halfway Thesis

1. Commitment – Always commit to your decisions.
2. Sticks and Carrots – I am motivated by the stick, not the carrot. Know what motivates you and use it. I need to set deadlines, you may need to receive rewards.
3. Weaknesses and Fears – Taking on a PhD makes you see the worst in […]

Betrayed by nature

Dr Robin Hesketh is a fellow at the University of Cambridge and, like me, a member of Selwyn College. Dr Hesketh is a scientist who has worked all his life in research institutes and universities, and for 25 years has worked in the biochemistry department at Cambridge. His major research area is the development of strategies for the treatment of cancer, and he lectures in various undergraduate courses on cell and molecular biology and cancer. He is also the author of ‘Betrayed by Nature‘, which describes the history of cancer research, cancer biology and methods of detection and treatment.

Dr Hesketh has kindly posted an entry I have written about Radiotherapy on his blog titled A Radiant Visitor.

Our NHS, Our cuts, Our losses

The NHS grew out of the devastation of the Second World War and is for many thought of with pride as a ‘testimony to human altruism’. In Cambridge I am continuously impressed by the diversity of students that come to study here. I enjoy observing our differences and our similarities, and learning of other cultures and […]