====== 2             Introduction ======
  
This document describes the kernel operation of TOPAS together with its macro language. This includes the fully featured program TOPAS as well as its variants TOPAS R and TOPAS P.
//xdd…//

//bkg                         ‘// Background

//str…                       ‘// Structure information for Rietveld refinement

//xo_Is…                   ‘// 2θ - I values for single line or whole powder pattern fitting

//d_Is…                     ‘// d - I values for single line or whole powder pattern fitting

//hkl_Is…                  ‘// lattice information for Le Bail or Pawley fitting

//fit_obj…                  ‘// User defined fit models

//hkl_Is_from_hkl4    ‘// Structure factors (F<sub>obs</sub>)<sup>2</sup> for creating a powder pattern from single crystal data

//str//, //xo_Is//, //d_Is// and //hkl_Is// are referred to as "phases" and the peaks of these as "phase peaks". A full listing of the data structures is given in section 8.1.
  
===== 2.1        Conventions =====

The following conventions are used in this manual:

  * Keywords are in //italics//.
  * Keywords enclosed in square brackets [ ] are optional.
  * Keywords ending in ... indicate that multiple keywords of that type are allowed.
  * Text beginning with the character # corresponds to a number.
  * Text beginning with the character $ corresponds to a User defined string.
  * E, !E or N placed after keywords have the following meaning:

E: An equation (i.e. = a+b;) or constant (i.e. 1.245) or a parameter name with a value (i.e. lp 5.4013) that can be refined

To avoid input errors it is useful to differentiate between keywords, macros, parameter names, and reserved parameter names. The conventions followed so far are as follows:
  
Keywords                                  : all lower case

Parameter names                       : first letter in lower case

Macro names                            : first letter in upper case

Reserved parameter names        : first letter in upper case

===== 2.2        Input file example (INP format) =====
  
The following is an example input file for Rietveld refinement of a phase mixture of corundum and fluorite:

Trigonal(@ 4.759, @ 12.992)

site Al x 0          y 0      z @ 0.3521  occ Al+3 1  beq @ 0.3

site O  x @ 0.3062   y 0      z    0.25   occ O-2  1  beq @ 0.3

scale @ 0.001

Cubic(@ 5.464)

site Ca x 0       y 0      z 0      occ Ca 1   beq @ 0.5

site F  x 0.25    y 0.25   z 0.25   occ F  1   beq @ 0.5

scale @ 0.001
The variants TOPAS P and TOPAS R support the fit objects as indicated in Table 2‑1. Descriptions of unsupported fit objects and their dependents in this manual may be ignored by the user.

Table 2‑1: Fit objects supported by TOPAS and its variants.

| //str// | ✓ | ✓ | ✗ |

===== 2.3        Test examples =====

The directory TEST_EXAMPLES contains many examples that can act as templates for creating INP files. In addition there are charge-flipping examples found in the CF directory and indexing examples in the INDEXING directory.
  
====== 3             Parameters ======

===== 3.1        When is a parameter refined =====

A parameter is flagged for refinement by giving it a name. The first character of the name can be an upper or lower case letter; subsequent characters can additionally include the underscore character %%'_'%% and the numbers 0 through 9. For example:

site Zr x 0 y 0 z 0 occ Zr+4 1 beq @ 0.5

or,   site Zr x 0 y 0 z 0 occ Zr+4 1 beq @b1 0.5

The b1 text is ignored in the case of @b1.
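
A parameter can also be given an explicit name rather than the anonymous @ flag, and prefixing that name with ! means it is used but not refined (see section 3.2). The following is a minimal illustrative sketch rather than one of the manual's own examples:

  site Zr x 0 y 0 z 0 occ Zr+4 1 beq bzr  0.5   ' bzr is a named parameter flagged for refinement
  ' or,
  site Zr x 0 y 0 z 0 occ Zr+4 1 beq !bzr 0.5   ' the ! means bzr is used but not refined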
  
===== 3.2        User defined parameters - the prm keyword =====

The [//prm|local// E] keyword defines a new parameter. For example:

prm b1 0.2   ' b1 is the name given to this parameter

             ' 0.2 is the initial value

site Zr x 0 y 0 z 0 occ Zr+4 0.5  beq = 0.5 + b1;

                    occ Ti+4 0.5  beq = 0.3 + b1;

Here b1 is a new parameter that will be refined; this particular example demonstrates adding a constant to a set of //beq'//s. Note the use of the '=' sign after the //beq// keyword indicating that the parameter is not in the form of N #value but rather an equation. In the following example b1 is used but not refined:

prm !b1 .2

site Zr x 0 y 0 z 0 occ Zr+4 0.5  beq = 0.5 + b1;

                    occ Ti+4 0.5  beq = 0.3 + b1;

Parameters can be assigned the following attribute equations that can be functions of other parameters:

x @ 0.1234

min       = Val-.2;

max       = Val+.2;

update    = Val + Rand(0, 1) Change;

stop_when = Abs(Change) < 0.000001;
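
The attribute list is truncated in this extract; //val_on_continue// (see section 5.7) is attached to a parameter in the same way. A minimal hedged sketch, assuming it accepts an equation like the attributes above:

  x @ 0.1234 val_on_continue = Val + Rand(-0.1, 0.1);   ' re-randomize x after each convergence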
  
===== 3.3        Parameter constraints =====

Equations can be a function of parameter names providing a mechanism for introducing linear and non-linear constraints, for example,

site Zr x 0 y 0 z 0 occ Zr+4 zr 1      beq 0.5

                    occ Ti+4 = 1-zr;   beq 0.3

Here the parameter zr is used in the equation "= 1-zr;". This particular equation defines the Ti+4 site occupancy parameter. Note, equations start with an equal sign and end in a semicolon.

site Zr x 0 y 0 z 0

occ Zr+4 zr      1  min=0; max=1;  beq 0.5

occ Ti+4 = 1-zr;                   beq 0.3

here zr will be constrained to within 0 and 1. //min/////max// are equations themselves and thus they can be a function of named parameters.

here the constraints are formulated without the need for a parameter name.
  
===== 3.4        The local keyword =====

The //local// keyword is used for defining named parameters as local to the top level, xdd level or phase level. For example, the following code fragment:

gauss_fwhm = a; ‘ 2<sup>nd</sup> xdd

the 1<sup>st</sup> //xdd// will be convoluted with a Gaussian with a FWHM of 1 and the 2<sup>nd</sup> with a Gaussian with a FWHM of 2. In other words the 1<sup>st</sup> //gauss_fwhm// equation uses the ‘a’ parameter from the top level and the second //gauss_fwhm// equation uses the ‘a’ parameter defined in the 2nd //xdd//. This is analogous, for example, to the scoping rules found in the C programming language.
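
A minimal sketch of the arrangement described above; the data file names are placeholders and the phase content is elided:

  local a 1                ' 'a' defined at the top level
  xdd file1.xy
     str ...
        gauss_fwhm = a;    ' 1st xdd: uses the top level 'a', giving a FWHM of 1
  xdd file2.xy
     local a 2             ' 'a' redefined local to the 2nd xdd
     str ...
        gauss_fwhm = a;    ' 2nd xdd: uses its local 'a', giving a FWHM of 2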
  
The following is not valid as b1 is defined twice but in a different manner.

//local// can greatly simplify complex INP files.
  
===== 3.5        Reporting on equation values =====

When an equation is used in place of a parameter 'name' and 'value' as in

After refinement the "0" is replaced by the value of the equation. The error associated with the equation is also reported when //do_errors// is defined.
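
The line referred to above is not reproduced in this extract. As a hedged sketch, assuming a "; : value" reporting form, the placeholder 0 after the colon is what gets overwritten:

  prm b1 0.2
  beq = 0.5 + b1; : 0   ' after refinement the 0 is replaced by the evaluated beq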
  
===== 3.6        Naming of equations =====

Equations can be given a parameter name, for example:

Non-sequential evaluation of equations is possible as parameters cannot be defined more than once with different values or equations; the following examples lead to redefinition errors:

prm a1 = 2;     prm a1 = 3;  ‘ redefinition error

prm b1 = 2 b3;  prm b1 = b3; ‘ redefinition error
  
===== 3.7        Parameter errors and correlation matrix =====

When //do_errors// is defined parameter errors and the correlation matrix are generated at the end of refinement. The errors are reported following the parameter value as follows:

Here the error in the lp parameter is 0.0012. The correlation matrix is identified by //[[#k003|C_matrix_normalized]]// and is appended to the OUT file if it does not already exist.
  
===== 3.8        Default parameter limits and LIMIT_MIN / LIMIT_MAX =====

Parameters with internal default //min/////max// attributes are shown in Table 3‑1. These limits avoid invalid numerical operations and, equally important, they stabilize refinement by directing the minimization routines towards lower <sub>{{techref_files:image002.gif?20x23}}</sub> values. Hard limits are avoided where possible and instead parameter values are allowed to move within a range for a particular refinement iteration. Without limits refinement often fails in reaching a low <sub>{{techref_files:image002.gif?20x23}}</sub>. User defined //min/////max// limits override the defaults. //min/////max// limits should be defined for parameters defined using the //prm|local// keyword.

Functionality is often realized through the use of the standard macros as defined in TOPAS.INC; this is an important file to view. Almost all of the //prm// keywords defined within this file have associated limits. For example, the CS_L macro defines a crystallite size parameter with a //min/max// of 0.3 and 10000 nanometers respectively.
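
In the same spirit, a User defined parameter can be given explicit limits; a hedged sketch, with an arbitrary name and starting value modelled on the CS_L limits quoted above:

  prm cs 200 min=0.3; max=10000;   ' crystallite size style parameter bounded like CS_L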

On termination of refinement, independent parameters that refined close to their limits are identified by the text "_LIMIT_MIN_#" or "_LIMIT_MAX_#" appended to the parameter value. The '#' corresponds to the limiting value. These warnings can be suppressed using the keyword //no_LIMIT_warnings//.

**Table 3‑1**  Default parameter limits.
  
| **Parameter** | **min** | **max** |
| //a, b, c// | Max(1.5, 0.995 Val - 0.05) | 1.005 Val + 0.05 |
| //al, be, ga// | Max(1.5, Val - 0.2) | Val + 0.2 |
| //scale// | 1e-11 |  |
| //sh_Cij_prm// | -2 Abs(Val) - 0.1 | 2 Abs(Val) + 0.1 |
| //occ// | 0 | 2 Val + 1 |
| //beq// | Max(-10, Val-10) | Min(20, Val+10) |
| //pv_lor, spv_l1, spv_l2// | 0 | 1 |
| //m1, m2// | 0.75 | 30 |
| //d// | 1e-6 |  |
| //xo// | Max(X1, Val - 40 [[#k094|Peak_Calculation_Step]]) | Min(X2, Val + 40 [[#k094|Peak_Calculation_Step]]) |
| //I// | 1e-11 |  |
| //z_matrix// radius | Max(0.5, Val .5) | 2 Val |
| //z_matrix// angles | Val - 90 | Val + 90 |
| //sample_length, receiving_slit_length, primary_soller_angle, secondary_soller_angle// |||
  
===== 3.9        Reserved parameter names =====

Table 3‑2 and Table 3‑4 list reserved parameter names that are interpreted internally. Table 3‑3 details dependencies for certain reserved parameter names. An exception is thrown when a reserved parameter name is used for a User defined parameter name. An example use of reserved parameter names is as follows:

Here the weighting keyword is written in terms of the reserved parameter names Yobs, Ycalc and X.
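
The weighting line itself is not reproduced in this extract. As a hedged sketch, a counting-statistics style weighting written with the reserved name Yobs might look like the following (the file name is a placeholder):

  xdd data.xy
     weighting = If(Yobs <= 1, 1, 1 / Yobs);   ' approximates 1/sigma(Yobs)^2 for counting statistics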
  
**Table 3‑2**  Reserved parameter names.

| **Name** | **Description** |
| Lpa, Lpb, Lpc | Corresponds to the a, b and c lattice parameters respectively. |
| Mi | An iterator used for multiplicities. See the PO macro of TOPAS.INC for an example of its use. |
| Peak_Calculation_Step | Returns the calculation step for phase peaks, see //[[#k094|x_calculation_step]]//. |
| QR_Removed, QR_Num_Times_Consecutively_Small | Can be used in the //[[#k064|quick_refine_remove]]// equation. |
| R, Ri | The distance between two sites R and an iterator Ri. Used in the equation part of //atomic_interaction//, //box_interaction// and //grs_interaction//. |
| Yobs, Ycalc, SigmaYobs | Yobs and Ycalc correspond to the observed and calculated data respectively. SigmaYobs corresponds to the estimated standard deviation in Yobs; it can be used in the weighting equation. |
  
**Table 3‑3**  Parameters that operate on phase peaks. Note, dependencies are not shown.

| **Keywords that can be a function of H, K, L, M, Xo, Th and D_spacing.** |||
| //lor_fwhm// //gauss_fwhm// //hat// //one_on_x_conv// //exp_conv_const// //circles_conv// | //stacked_hats_conv// //user_defined_convolution// //th2_offset// //scale_pks// //h1, h2, m1, m2// //spv_h1, spv_h2, spv_l1, spv_l2// | //pv_lor, pv_fwhm// //ymin_on_ymax// //la, lo, lh, lg// //phase_out// //scale_top_peak// //pk_xo// |
  
**Table 3‑4**  Phase intensity reserved parameter names.

| **Name** | **Description** |
| A01, A11, B01, B11 | Used for reporting structure factor details as defined in equations (7‑5a) and (7‑5b), see the macros Out_F2_Details and Out_A01_A11_B01_B11. |
| Iobs_no_scale_pks, Iobs_no_scale_pks_err | Returns the observed integrated intensity of a phase peak and its associated error without any //scale_pks// applied. Iobs_no_scale_pks for a particular phase peak p is calculated using the Rietveld decomposition formula, or, <sub>{{techref_files:image004.gif?271x40}}</sub> …see footnote <sup>1</sup>, where P<sub>x,p</sub> is the phase peak p calculated at the x-axis position x. The summation Σ<sub>x</sub> extends over the x-axis extent of the peak p. A good fit to the observed data results in an Iobs_no_scale_pks being approximately equal to I_no_scale_pks. |
| I_no_scale_pks | The integrated intensity without //scale_pks// equations applied, or, I_no_scale_pks = Get(//scale//) //I// …see footnote <sup>1</sup> |
| I_after_scale_pks | The integrated intensity with //scale_pks// equations applied. I_after_scale_pks = Get(//scale//) Get(//all_scale_pks//) //I// …see footnote <sup>1</sup>. Get(//all_scale_pks//) returns the cumulative value of all //scale_pks// equations applied to a phase. |
| <sup>1</sup>) //I// corresponds to the //I// parameter for //hkl_Is//, //xo_Is// and //d_Is// phases or (M F<sub>obs</sub><sup>2</sup>) for //str// phases. ||
  
===== 3.10  Val and Change reserved parameter names =====

Val is a reserved parameter name corresponding to the #value of a parameter during refinement. Change is a reserved parameter name which corresponds to the change in a refined parameter at the end of an iteration. It is determined as follows:

x @ 0.1234 update=Val + 0.1 ArcTan(Change 10); min=Val-.2; max=Val+.2;
  
====== 4             Equation Operators and Functions ======

**Table 4‑1**: Operators and functions supported in equations (case sensitive). In addition equations can be functions of User defined parameter names.
  
| **Classes** | **Symbols / Functions** | **Remarks** |
| Parentheses | () or [] |  |
| Arithmetic | + |  |
| | - |  |
| | * | The multiply sign is optional. (x*y = x y) |
| | / |  |
| | %%^%% | x%%^%%y, Calculates x to the power of y. Precedence: \\ x%%^%%y%%^%%z = (x%%^%%y)%%^%%z \\ x%%^%%y*z = (x%%^%%y)*z \\ x%%^%%y/z = (x%%^%%y)/z |
| Conditional | a == b | Returns 1 if a = b |
| | a < b | Returns 1 if a < b |
| | a <= b | Returns 1 if a <= b |
| | a > b | Returns 1 if a > b |
| | a >= b | Returns 1 if a >= b |
| | And(a, b, ...) | Returns 1 if all arguments evaluate to non-zero values. |
| | Or(a, b, ...) | Returns 1 if at least one argument evaluates to non-zero. |
| Mathematical | Sin(x) | Returns the sine of x |
| | Cos(x) | Returns the cosine of x |
| | Tan(x) | Returns the tangent of x |
| | ArcSin(x) | Returns the arc sine of x (-1 <= x <= 1) |
| | ArcCos(x) | Returns the arc cosine of x (-1 <= x <= 1) |
| | ArcTan(x) | Returns the arc tangent of x |
| | Exp(x) | Returns the exponential e to the x |
| | Ln(x) | Returns the natural logarithm of x |
| | Sqrt(x) | Returns the positive square root |
| Special | Sum(returns summation_eqn, initializer, conditional_test, increment_eqn) ||
| | If(conditional_test, return true_eqn, return false_eqn) ||
| | For(Mi = 0, Mi < M, Mi = Mi+1 ,....) ||
| | Get($keyword) | Gets the parameter associated with $keyword |
| Miscellaneous | Min(a,b,c...) | Returns the min of all arguments |
| | Max(a,b,c...) | Returns the max of all arguments |
| | Abs(x) | Returns the absolute value of x |
| | Mod(x, y) | Returns the modulus of x/y. Mod(x, 0) returns 0 |
| | Rand(x1, x2) | Returns a random number between x1 and x2 |
| | Sign(x) | Returns the sign of x, or zero if x = 0 |
| | Break | Can be used to terminate loops implied by the equations atomic_interaction, box_interaction and grs_interaction. |
| | Break_Cycle | Can be used to terminate a [[#k144|refinement cycle]]. For example, if a particular penalty is greater than a particular value then the refinement cycle can be terminated as follows: atomic_interaction ai = (R-1.3)%%^%%2; … penalty = If( ai > 5, Break_Cycle, 0); |
  
In addition the following functions are implemented:

**[[#k154|Sites_Geometry_Dihedral_Angle($Name)]]**
  
===== 4.1        'If' and nested 'if' statements =====

'Sum' and 'If' statements can be used in parameter equations, for example:

str...

ENDIF
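
The example above is truncated in this extract. As a separate minimal sketch with arbitrary parameter names, If statements can be nested inside a prm equation:

  prm t 1.3
  prm tc = If(t < 0, 0, If(t > 1, 1, t));   ' nested If clamps t into the range 0 to 1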
  
===== 4.2        Floating point exceptions =====

An exception is thrown when an invalid floating point operation is encountered; examples are:

//min/////max// equations, or Min/Max functions or ‘If’ statements can be used to avoid invalid floating point operations. Equations can also be manipulated to yield valid floating point operations, for example, Exp(-1000) can be used in place of 1/Exp(1000).
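
A hedged sketch of both ideas, with arbitrary parameter names: a //min// limit or an If test keeps the argument of Ln and the divisor away from zero:

  prm x 0.1 min=1e-6;                      ' min keeps x positive so Ln(x) and 1/x stay defined
  prm y = If(x > 1e-6, Ln(x) + 1/x, 0);    ' If guards against the invalid region explicitly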
  
====== 5             The Minimization Routines ======

The Newton-Raphson non-linear least squares method is used by default with the Marquardt method (1963) included for stability. A Bound Constrained Conjugate Gradient (BCCG) method (Coelho, 2005) incorporating //min/////max// limits is used for solving the normal equations. The objective function <sub>{{techref_files:image002.gif?20x23}}</sub> is written as:

| <sub>{{techref_files:image006.gif?76x23}}</sub> | (5‑1) |
| where <sub>{{techref_files:image008.gif?180x45}}</sub> and <sub>{{techref_files:image010.gif?135x48}}</sub> | (5‑2) |
| where K = <sub>{{techref_files:image012.gif?93x45}}</sub> | (5‑3) |

Y<sub>o,m</sub> and Y<sub>c,m</sub> are the observed and calculated data respectively at data point m, M the number of data points, w<sub>m</sub> the weighting given to data point m which for counting statistics is given by w<sub>m</sub>=1/σ(Y<sub>o,m</sub>)<sup>2</sup> where σ(Y<sub>o,m</sub>) is the error in Y<sub>o,m</sub>, P<sub>p</sub> are penalty functions, N<sub>p</sub> the number of penalty functions and K<sub>1</sub> and K<sub>2,p</sub> are weights applied to the penalty functions and are described below. K normalizes <sub>{{techref_files:image002.gif?20x23}}</sub> such that the default //[[#k013|chi2_convergence_criteria]]// value of 0.001 is sufficient for routine refinements. Typical //chi2_convergence_criteria// values for structure determination range from 0.01 to 0.1. Penalty functions are minimized when there are no observed data Y<sub>o</sub>; see example onlypena.inp.

The normal equations are generated by the usual expansion of Y<sub>c,m</sub> to a first order Taylor series around the parameter vector **p** ignoring second order terms. The size of **p** corresponds to the number of independent parameters N. The penalty functions are expanded to a second order Taylor series around the parameter vector **p**. The resulting normal equations are:

| **A** Δ**p** = **Y** | (5‑4) |
| where **A** = **A**<sub>1</sub> + **A**<sub>2</sub> |  |

| <sub>{{techref_files:image014.gif?315x148}}</sub> | (5‑5) |

The Taylor coefficients Δ**p** correspond to changes in the parameters **p**. Eq. (5‑4) represents a linear set of equations in Δ**p** that are solved for each iteration of refinement. The calculation of the off diagonal terms in **A**<sub>2</sub> (the second derivatives of the penalty functions) is tedious and preliminary investigations have indicated that their inclusion does not significantly improve convergence of <sub>{{techref_files:image002.gif?20x23}}</sub>; A<sub>2,ij</sub> for i≠j are therefore set to zero.

The penalty weighting K<sub>2,i</sub> is used to give equal weights to the sum of the inverse error terms in the parameters σ<sub>1</sub>(//p//<sub>i</sub>)<sup>2</sup> and σ<sub>2</sub>(//p//<sub>i</sub>)<sup>2</sup> calculated from <sub>{{techref_files:image016.gif?20x23}}</sub> and <sub>{{techref_files:image018.gif?20x23}}</sub> respectively. Neglecting the off diagonal terms gives σ<sub>1</sub>(//p//<sub>i</sub>)<sup>2</sup>=1/A<sub>1,ii</sub> and σ<sub>2</sub>(//p//<sub>i</sub>)<sup>2</sup>=1/B<sub>2,ii</sub> and thus K<sub>2,i</sub> is written as shown in Eq. (5‑6).

| <sub>{{techref_files:image020.gif?296x48}}</sub> | (5‑6) |

The penalty weighting K<sub>1</sub> determines the weight given to the penalties <sub>{{techref_files:image018.gif?20x23}}</sub> relative to <sub>{{techref_files:image016.gif?20x23}}</sub>; typical values range from 0.2 to 2.
  
===== 5.1        The Marquardt method =====

The Marquardt (1963) method applies a scaling factor to the diagonal elements of the **A** matrix when the solution to the normal equations of Eq. (5‑4) fails to reduce <sub>{{techref_files:image002.gif?20x23}}</sub>, or,

| A<sub>ii,scaled</sub> = A<sub>ii</sub> (1+η) |  |

where η is the Marquardt constant. After applying the Marquardt constant the normal equations are again solved and <sub>{{techref_files:image002.gif?20x23}}</sub> recalculated; this scaling process is repeated until <sub>{{techref_files:image002.gif?20x23}}</sub> reduces. Repeated failure results in a very large Marquardt constant and taken to the limit the off diagonal terms can be ignored and the solution to the normal equations can be approximated as:

| Δp<sub>i</sub> = Y<sub>i</sub> / (A<sub>ii</sub> (1 + η)) | (5‑7) |

The Marquardt method is used when the refinement comprises observed data Y<sub>o</sub>. The keyword //no_normal_equations// prevents the use of the Marquardt method.

The Marquardt constant η is automatically determined each iteration. Its determination is based on the actual change in <sub>{{techref_files:image002.gif?20x23}}</sub> and the expected change in <sub>{{techref_files:image002.gif?20x23}}</sub>.
  
===== 5.2        Approximating the A matrix - the BFGS method =====

The //approximate_A// keyword can be used to approximate the **A** matrix, Eq. (5‑4), without the need to calculate the **A** matrix dot products. The approximation is based on the BFGS method (Broyden, 1970; Fletcher, 1970; Goldfarb, 1970; Shanno, 1970). BCCG is used by default for solving the normal equations; alternatively, LU-decomposition can be used if //use_LU// is defined and the **A** matrix is not sparse. Note that LU-decomposition requires the full A matrix and thus it may be too memory intensive for problems with 10s of thousands of parameters. LU-decomposition can also be too slow when the number of parameters is greater than about one thousand parameters.

Approximating **A** is useful when the calculation of the **A** matrix dot products is proving too expensive. When penalties dominate a refinement then the use of //approximate_A// may also improve convergence. //approximate_A// cannot be used with //line_min// or //use_extrapolation//.

The single crystal refinement examples AE14-APPROX-A.INP and AE1-APPROX-A.INP are cases where the use of //approximate_A// achieves convergence in less time than with the calculated **A** matrix.

When using //approximate_A// the A matrix can be made sparse by defining [[#k158|A_matrix_memory_allowed_in_Mbytes]] and/or [[#k158|A_matrix_elements_tollerance]]. This allows for the refinement of a very large number of parameters.
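
A hedged sketch of how these keywords might be combined; the 100 Mbyte figure is an arbitrary illustration, not a recommendation:

  approximate_A                            ' BFGS approximation of the A matrix
  A_matrix_memory_allowed_in_Mbytes 100    ' cap the A matrix memory, making it sparse if necessary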
  
===== 5.3        Line minimization and Parameter extrapolation =====

Line minimization, better known as the steepest descent method, is invoked with the keyword //line_min//. It uses a direction in parameter space given by Δp<sub>i</sub>=Y<sub>i</sub>/A<sub>ii</sub> to minimize on <sub>{{techref_files:image002.gif?20x23}}</sub>(p+λΔp) by adjusting λ.

Parameter Extrapolation, invoked with the keyword //use_extrapolation//, uses parabolic extrapolation of the parameters as a function of iteration, or, λ is adjusted such that <sub>{{techref_files:image002.gif?20x23}}</sub>(**a**λ<sup>2</sup>+**b**λ+**c**) is minimized where for a particular parameter p<sub>i</sub> at iteration k we have a<sub>i</sub>=(y<sub>1</sub>-2y<sub>2</sub>+y<sub>3</sub>)/2, b<sub>i</sub>=(y<sub>3</sub>-y<sub>1</sub>)/2 and c<sub>i</sub>=y<sub>2</sub> where y<sub>1</sub>=(p<sub>i,k-5</sub>+p<sub>i,k-4</sub>)/2, y<sub>2</sub>=(p<sub>i,k-3</sub>+p<sub>i,k-2</sub>)/2 and y<sub>3</sub> = (p<sub>i,k-1</sub>+p<sub>i,k-0</sub>)/2. Parameter Extrapolation encompasses the last six sets of parameter values. In cases where both <sub>{{techref_files:image016.gif?20x23}}</sub> and <sub>{{techref_files:image018.gif?20x23}}</sub> exist then Parameter Extrapolation reduces possible oscillatory behaviour in <sub>{{techref_files:image002.gif?20x23}}</sub>. Parameter extrapolation when used with Line Minimization can increase the rate of convergence when refining on penalties only.

Line minimization and Parameter Extrapolation have relatively small memory footprints and thus can be useful when the **A** matrix consumes too much memory. Alternatively the //approximate_A// keyword can be used.

Line minimization with the full **A** matrix calculation (no //approximate_A// defined) can increase the rate of convergence on problems like Pawley refinement.
  
===== 5.4        Minimizing on penalties only =====

When there are no observed data or when //only_penalties// is defined then by default the BFGS method is used, see examples rosenbrock-10.inp and HOCK.INP. For penalties only the BFGS method typically converges faster than //line_min/////use_extrapolation//; however, for ‘penalties only’ it can be overridden with the use of //line_min//.
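
A hedged sketch of a penalties-only input in the spirit of rosenbrock-10.inp; the starting values and iteration count are arbitrary:

  iters 100000
  prm x1 -1   prm x2 1
  penalty = (1 - x1)^2 + 100 (x2 - x1^2)^2;   ' Rosenbrock-style penalty; no observed data required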
  
===== 5.5        Summary, Iteration and Refinement Cycle =====

**Table 5‑1** shows various keyword usages for typical refinement problems. The term “refinement cycle” is used to describe a single convergence. The reserved parameter Cycle returns the current refinement cycle with counting starting at zero. The reserved parameter Cycle_Iter returns the current iteration within a Cycle with counting starting at zero.

| **Table 5‑1**  Keyword sequences for various refinement types |||
| **Refinement type** | **Keywords to use** | **Comments** |
| Rietveld refinement, no penalties |  | Marquardt refinement. **A** matrix calculation. |
| Rietveld refinement with a moderate number of penalties | //line_min// (Maybe) | Line minimization used if //line_min//. Marquardt refinement. **A** matrix calculation. |
| Rietveld refinement dominated by penalties | //approximate_A// | BFGS method of refinement. **A** matrix approximation. |
| Pawley refinement | //line_min// | Line minimization. Marquardt refinement. **A** matrix calculation. |
| Penalties only |  | BFGS method of refinement. **A** matrix approximation. |
| Refinements with a large number of parameters | //approximate_A// | BFGS method of refinement. **A** matrix approximation. |
  
-===== 5.6        ​quick_refine and computational issues =====+===== 5.6        ​quick_refine and computational issues =====
  
 The computationally dominant factor of calculating Eq. (5‑5) is problem dependent. For Rietveld refinement with a moderate number of parameters then the calculation of the peak parameter derivatives may well be the most expensive. On the other hand for Rietveld refinement with a large number of structural parameters and data points then the calculation of the A<​sub>​1,​ij</​sub>​ dot products would be the dominant factor, where, the number of operations scale by M(N<​sup>​2</​sup>​+N)/​2. Before the development of the BCCG routine (Coelho, 2005), the solution to the normal equations, Eq. (5‑4), was also very expensive. ​ The computationally dominant factor of calculating Eq. (5‑5) is problem dependent. For Rietveld refinement with a moderate number of parameters then the calculation of the peak parameter derivatives may well be the most expensive. On the other hand for Rietveld refinement with a large number of structural parameters and data points then the calculation of the A<​sub>​1,​ij</​sub>​ dot products would be the dominant factor, where, the number of operations scale by M(N<​sup>​2</​sup>​+N)/​2. Before the development of the BCCG routine (Coelho, 2005), the solution to the normal equations, Eq. (5‑4), was also very expensive. ​
Line 722: Line 721:
 The //​[[#​k064|quick_refine]]//​ keyword removes parameters during a [[#​k144|refinement cycle]] thus shrinking the size of the **A** matrix by reducing N. Parameters are removed if the condition defined in Eq. (5‑8) is met for three consecutive iterations. The //​[[#​k064|quick_refine]]//​ keyword removes parameters during a [[#​k144|refinement cycle]] thus shrinking the size of the **A** matrix by reducing N. Parameters are removed if the condition defined in Eq. (5‑8) is met for three consecutive iterations.
  
| <sub>{{techref_files:image022.gif?243x23}}</sub> | (5‑8) |
  
Alternatively, parameters can be removed or reinstated during a refinement cycle using //[[#k064|quick_refine_remove]]//. This keyword provides a means of performing block refining. If //quick_refine_remove// is not defined then all parameters are reinstated at the start of refinement cycles.
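
As a rough illustration of where //quick_refine// sits in an input file, the sketch below uses an arbitrary value of 0.1 for the removal criterion; it shows keyword placement only and is not a recommended setting.

   ' Sketch only: shrink the A matrix during refinement cycles by removing
   ' parameters that satisfy the quick_refine criterion of Eq. (5-8).
   quick_refine 0.1
   ' quick_refine_remove could additionally be defined to control how
   ' parameters are removed or reinstated, as described above.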
  
===== 5.7        Auto_T and randomize_on_errors =====
  
It is sometimes difficult to formulate optimum //[[#x000|val_on_continue]]// functions for simulated annealing. This is especially true in structure solution using rigid bodies, where optimum randomization of the rigid body parameters can be difficult to ascertain. //randomize_on_errors// is a means of automatically randomizing parameters based on the approximate errors in the parameters as given in Eq. (5‑9), where T is the current temperature and K is as defined in Eq. (5‑3).

| <sub>{{techref_files:image024.gif?285x27}}</sub> | (5‑9) |

Q is a scaling factor determined such that convergence to a previous parameter configuration occurs 7.5% of the time on average. When //randomize_on_errors// is used, the magnitudes of the //temperature//(s) are not significant; only their relative variation matters.

Note, when //val_on_continue// is defined then the corresponding parameter is not randomized according to //randomize_on_errors//.
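
The following sketch shows one possible simulated annealing setup using //randomize_on_errors//; the temperature values are arbitrary and, as noted above, only their relative variation matters. It assumes //temperature// may be repeated to define a temperature regime, as described in the keyword listing.

   ' Simulated annealing sketch; values are illustrative only.
   randomize_on_errors
   temperature 1
   temperature 1
   temperature 2   ' occasionally randomize more strongly
   temperature 1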
  
===== 5.8        Criteria of fit =====
  
Table 5‑2: Criteria of fit (see Young 1993 for details). //Y<sub>o,m</sub>// and //Y<sub>c,m</sub>// are the observed and calculated data respectively at data point //m//, //Bkg<sub>m</sub>// the background at data point //m//, //M// the number of data points, //P// the number of parameters, //w<sub>m</sub>// the weighting given to data point //m//, which for counting statistics is given by //w<sub>m</sub>// = 1/σ(//Y<sub>o,m</sub>//)<sup>2</sup> where σ(//Y<sub>o,m</sub>//) is the error in //Y<sub>o,m</sub>//, and //I<sub>"o",k</sub>// and //I<sub>c,k</sub>// the "observed" and calculated intensities of the //k//<sup>th</sup> reflection.
  
| **Criteria of fit** | **Definition** ||
| "R-pattern", Rp, Rp' (background corrected) | <sub>{{techref_files:image026.gif?139x57}}</sub> | <sub>{{techref_files:image028.gif?152x59}}</sub> |
| "R-weighted pattern", Rwp, Rwp' (background corrected) | <sub>{{techref_files:image030.gif?176x60}}</sub> | <sub>{{techref_files:image032.gif?183x56}}</sub> |
| "R-expected", Rexp, Rexp' (background corrected) | <sub>{{techref_files:image034.gif?118x52}}</sub> | <sub>{{techref_files:image036.gif?183x54}}</sub> |
| "Goodness of fit", GOF | <sub>{{techref_files:image038.gif?264x54}}</sub> ||
| "R-Bragg", RB | <sub>{{techref_files:image040.gif?129x52}}</sub> ||
| "Durbin-Watson", d (Durbin & Watson, 1971; Hill & Flack, 1987) | <sub>{{techref_files:image042.gif?301x88}}</sub> ||
  
====== 6             Peak Generation and "peak_type" ======
  
A number of analytical profile shapes can be convoluted with predefined or User defined functions. Analytical convolutions are used where possible.

Numerical convolution is important for laboratory powder diffraction data as many of the instrument aberration functions cannot be convoluted analytically. The process of convolution from a fundamental parameters perspective is an approximation whereby second order and higher effects are typically neglected. These approximations are valid except for extreme cases that are unlikely to exist in practice, for example, axial divergence with Soller slit acceptance angles greater than about 12 degrees.
  
===== 6.1        Source emission profiles =====
  
Generation of the [[#k032|emission profile]] is the first step in peak generation. It comprises [[#k032|EM lines]], EM<sub>k</sub>, each of which is a Voigt comprising the parameters //la//, //lo//, //lh// and //lg//. The reserved parameter name Lam is assigned the //lo// value of the EM<sub>k</sub> line with the largest //la// value; this EM<sub>k</sub> will be called EMREF. It is used to calculate d-spacings. The interpretation of EM data is dependent on //[[#k047|peak_type]]//. For all peak types the position 2θ<sub>k</sub> calculated for a particular emission line at a particular Bragg position 2θ is determined as follows:

<sub>{{techref_files:image044.gif?200x48}}</sub> where

<sub>{{techref_files:image046.gif?164x23}}</sub> 2θ for //xo_Is// phases corresponds to the //xo// parameter. 2θ for //d_Is// phases is given by the Bragg equation 2θ = ArcSin(Lam/(2 //d//)) 360/Pi, where //d// corresponds to the value of the //d// parameter. 2θ values for //str// and //hkl_Is// phases are calculated from the lattice parameters.
  
The FWHM<sub>k</sub> in °2θ for an EM<sub>k</sub> line is determined from the relations provided in Table 6‑1.

The x-axis extent (x1, x2) to which an EM line is calculated is determined by:

<sub>{{techref_files:image048.gif?325x44}}</sub> where EMREF corresponds to the emission profile line with the largest //la// value. The default for //ymin_on_ymax// is 0.001. Emission profile data have been taken from Hölzer et al. (1997) and are stored in *.LAM files in the LAM directory.
  
  
| **Table 6‑1**  FWHM<sub>k</sub> in °2θ for an EM<sub>k</sub> line for the different peak types. ||
| FP peak type | <sub>{{techref_files:image050.gif?233x48}}</sub> |
| PV peak type | <sub>{{techref_files:image052.gif?217x47}}</sub> |
| SPVII peak type | <sub>{{techref_files:image054.gif?171x47}}</sub>, <sub>{{techref_files:image056.gif?175x47}}</sub> |
| SPV peak type | <sub>{{techref_files:image058.gif?205x47}}</sub>, <sub>{{techref_files:image060.gif?208x47}}</sub> |
  
  
===== 6.2        Peak generation and peak types =====
  
Phase peaks P are generated as follows:

| P = Get(//scale//) Get(all_scale_pks) //I// EM(//peak_type//) ⊗ Convolutions | (6‑1) |

where the emission profile (EM) is first generated with emission profile lines of type //peak_type//; the symbol ⊗ denotes convolution. Peaks are then convoluted with any defined convolutions, multiplied by the //scale// parameter, multiplied by any defined //scale_pks//, and then multiplied by an intensity parameter. For //xo_Is//, //d_Is// and //hkl_Is// phases the intensity is given by the //I// parameter. For //str// phases it corresponds to the square of the structure factor F<sup>2</sup>(hkl). Convolutions are normalized and do not change the area under a peak except for the //capillary_diameter_mm// and //lpsd_th2_angular_range_degrees// convolutions. The area under the emission profile is determined by the sum of the //la// parameters; typically they add up to 1.
  
The definitions of the pseudo-Voigt and PearsonVII functions are provided in Table 6‑2 (symmetric functions) and Table 6‑3 (split functions). The following terms are used:

__Symmetric functions__

x                            (2θ-2θ<sub>k</sub>) where 2θ<sub>k</sub> is the position of the k<sup>th</sup> reflection

fwhm                         full width at half maximum

η                            PV mixing parameter

__Asymmetric functions__

//fwhm1//, //fwhm2//         fwhm for the left and right composite function

//m1//, //m2//               Exponents for the composite functions

//η1//, //η2//               PV mixing parameters for the composite functions
  
  
**Table 6‑2**  Unit area peak types for symmetric functions.

| **Profile Function** | **Definition** |
| Gaussian, G<sub>UA</sub>(x) | <sub>{{techref_files:image062.gif?209x51}}</sub> where <sub>{{techref_files:image064.gif?119x27}}</sub>, <sub>{{techref_files:image066.gif?82x23}}</sub> |
| Lorentzian, L<sub>UA</sub>(x) | <sub>{{techref_files:image068.gif?193x51}}</sub> where <sub>{{techref_files:image070.gif?56x23}}</sub>, <sub>{{techref_files:image072.gif?41x23}}</sub> |
| PseudoVoigt, PV<sub>UA</sub>(x) | <sub>{{techref_files:image074.gif?189x24}}</sub> |
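
As a guide to the symbols used in Table 6‑2, the usual unit area forms of these functions are given below; these are the standard expressions written for reference, and the images above remain the authoritative definitions.

   G_{UA}(x) = \frac{2}{fwhm}\sqrt{\frac{\ln 2}{\pi}}\, \exp\!\left(-\frac{4\ln 2\, x^2}{fwhm^2}\right)

   L_{UA}(x) = \frac{2}{\pi\, fwhm}\, \left(1 + \frac{4 x^2}{fwhm^2}\right)^{-1}

   PV_{UA}(x) = \eta\, L_{UA}(x) + (1 - \eta)\, G_{UA}(x)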
  
  
**Table 6‑3**  Unit area peak types for split functions.

| **Profile Function** | **Definition** |
| Split PearsonVII, SPVII | <sub>{{techref_files:image076.gif?233x21}}</sub> where <sub>{{techref_files:image078.gif?192x28}}</sub> <sub>{{techref_files:image080.gif?106x23}}</sub> <sub>{{techref_files:image082.gif?215x28}}</sub> <sub>{{techref_files:image084.gif?94x23}}</sub> <sub>{{techref_files:image086.gif?248x56}}</sub> <sub>{{techref_files:image088.gif?121x24}}</sub>, <sub>{{techref_files:image090.gif?127x24}}</sub> <sub>{{techref_files:image092.gif?88x21}}</sub>, <sub>{{techref_files:image094.gif?93x21}}</sub>, <sub>{{techref_files:image096.gif?105x21}}</sub> |
| Split PseudoVoigt, SPV | <sub>{{techref_files:image098.gif?281x23}}</sub> where <sub>{{techref_files:image100.gif?148x23}}</sub> <sub>{{techref_files:image080.gif?106x23}}</sub> <sub>{{techref_files:image102.gif?164x23}}</sub> <sub>{{techref_files:image084.gif?94x23}}</sub> <sub>{{techref_files:image104.gif?267x23}}</sub> <sub>{{techref_files:image092.gif?88x21}}</sub>, <sub>{{techref_files:image094.gif?93x21}}</sub>, <sub>{{techref_files:image096.gif?105x21}}</sub> |
  
  
Lorentzian and Gaussian convolutions using //lor_fwhm// and //gauss_fwhm// equations are analytically convoluted with FP and PV peak types and numerically convoluted with the SPVII and SPV peak types. These numerical convolutions have a high degree of accuracy as they comprise analytical Lorentzian and Gaussian functions convoluted with straight line segments.

| TCHZ_Peak_Type: The modified Thompson-Cox-Hastings pseudo-Voigt "TCHZ" is defined as (e.g. Young, 1993), see example ALVO4_TCH.INP: \\ η = 1.36603 q - 0.47719 q<sup>2</sup> + 0.1116 q<sup>3</sup>, where q = Γ<sub>L</sub> / Γ \\ Γ = (Γ<sub>G</sub><sup>5</sup> + AΓ<sub>G</sub><sup>4</sup>Γ<sub>L</sub> + BΓ<sub>G</sub><sup>3</sup>Γ<sub>L</sub><sup>2</sup> + CΓ<sub>G</sub><sup>2</sup>Γ<sub>L</sub><sup>3</sup> + DΓ<sub>G</sub>Γ<sub>L</sub><sup>4</sup> + Γ<sub>L</sub><sup>5</sup>)<sup>0.2</sup> = fwhm \\ A = 2.69269, B = 2.42843, C = 4.47163, D = 0.07842 \\ Γ<sub>G</sub> = (U tan<sup>2</sup>θ + V tanθ + W + Z / cos<sup>2</sup>θ)<sup>0.5</sup> \\ Γ<sub>L</sub> = X tanθ + Y / cosθ \\ with U, V, W, X, Y, Z as refineable parameters. ||
  
===== 6.3        Convolution and the peak generation stack =====
  
The emission profile of a peak P0 of a certain peak type (ie. FP, PV etc…) is first calculated and placed onto a ‘Peak calculation stack’. P0 analytically includes //lor_fwhm// and //gauss_fwhm// convolutions for FP and PV peak types and additionally one //hat// convolution if defined; the //hat// convolution is included analytically only if its corresponding //num_hats// has a value of 1 and if it does not take part in stack operations. Further defined convolutions are convoluted with the top member of the stack. The last convolution should leave the stack with one entry representing the final peak shape. The following keywords allow for manipulation of the Peak calculation stack:
push_peak

   prm a0 481.71904 del = 0.05 Val + 2;

   prm a1 -241.87060 del = 0.05 Val + 2;

   exp_conv_const = a0 + a1 / D_spacing;

bring_2nd_peak_to_top

   prm b0 -3.62905 del = 0.05 Val + 2;

   prm b1 6.44536 del = 0.05 Val + 2;

   exp_conv_const = b0 + b1 / D_spacing^4;

add_pop_1st_2nd_peak
and the next convolution results in:

Stack = P0 ⊗ exp_conv_const, P0 ⊗ exp_conv_const

Thus the stack contains two peaks convoluted with exponentials. The last statement //add_pop_1st_2nd_peak// produces:

Stack = P0 ⊗ exp_conv_const + P0 ⊗ exp_conv_const
  
===== 6.4        Speed / Accuracy and peak_buffer_step =====
  
For computational efficiency phase peaks are calculated at predefined 2θ intervals in a "peaks buffer". In-between peaks are determined by stretching and interpolating. Use of the peaks buffer dramatically reduces the number of peaks actually calculated; typically no more than 50 to 100 peaks are necessary to accurately describe the peaks across a whole diffraction pattern. The following keywords affect the accuracy of phase peaks:

Default values for these are typically adequate. //peak_buffer_step// determines the maximum x-axis spacing between peaks in the peaks buffer; it has a default value of 500*Peak_Calculation_Step. A value of zero will force the calculation of a new peak in the peaks buffer for each peak of the phase. Note that peaks are not calculated for x-axis regions that are void of phase peaks.

//convolution_step// defines an integer corresponding to the number of calculated data points per measured data point used to calculate the peaks in the peaks buffer; see //[[#k094|x_calculation_step]]// as well. Increasing the value of //convolution_step// improves accuracy for data with large step sizes or for peaks that have fewer than 7 data points across the FWHM.

//ymin_on_ymax// determines the x-axis extents of a peak (see also section 6.1).

Small values for //aberration_range_change_allowed// reduce the spacing between peaks in the peaks buffer and consequently increase the number of peaks in the peaks buffer.
  
  
**Table 6‑4** Default values for //aberration_range_change_allowed// for the following peak type parameters and convolutions.

| lor_fwhm and gauss_fwhm | [[#k094|Peak_Calculation_Step]] for all lor_fwhm and gauss_fwhm defined. |
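
To make the use of these keywords concrete, the sketch below shows illustrative settings; the numbers are arbitrary examples rather than recommendations, and placement within the input file follows the keyword listing.

   ' Accuracy related settings (illustrative values only)
   convolution_step 2      ' two calculated points per measured data point
   peak_buffer_step 0.5    ' maximum x-axis spacing between peaks in the peaks buffer
   ymin_on_ymax 0.0005     ' extend peak tails beyond the 0.001 default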
  
====== 7             Miscellaneous ======
  
===== 7.1        Instrument and sample convolutions =====
  
Diffractometer instrument and sample aberration functions used for peak profile synthesis are generated from generic convolutions. For example, the “simple” axial divergence model is described using the generic convolution //circles_conv// as defined in the Simple_Axial_Model macro. Table 7‑1 lists some of the instrument convolutions supported. In addition the full axial divergence model of Cheary & Coelho (1998a, 1998b) is supported.

**Table 7‑1** Instrument and sample aberration functions in terms of <sub>{{techref_files:image108.gif?74x24}}</sub>, where 2θ is the measured angle and 2θ<sub>k</sub> the Bragg angle. R<sub>P</sub> and R<sub>S</sub> correspond to the primary and secondary radius of the diffractometer respectively.
  
| **Aberrations** | **Name** | **Aberration function Fn(ε)** |
| **Instrument** |||
| Equatorial divergence (fixed divergence slits) | EDFA [°] | <sub>{{techref_files:image110.gif?115x27}}</sub> for <sub>{{techref_files:image112.gif?37x17}}</sub> to <sub>{{techref_files:image114.gif?193x25}}</sub> [°2θ] |
| Equatorial divergence (variable divergence slits) | EDFL [mm] | <sub>{{techref_files:image110.gif?115x27}}</sub> for <sub>{{techref_files:image112.gif?37x17}}</sub> to <sub>{{techref_files:image116.gif?244x27}}</sub> [°2θ] |
| Size of source in the equatorial plane | TA [mm] | <sub>{{techref_files:image118.gif?41x23}}</sub> = Hat Shape, for <sub>{{techref_files:image120.gif?119x24}}</sub> where <sub>{{techref_files:image122.gif?131x24}}</sub> [°2θ] |
| Specimen tilt; thickness of sample surface as projected onto the equatorial plane | ST [mm] | <sub>{{techref_files:image118.gif?41x23}}</sub> = Hat Shape, for <sub>{{techref_files:image120.gif?119x24}}</sub> where <sub>{{techref_files:image124.gif?184x24}}</sub> [°2θ] |
| Receiving slit length in the axial plane | SL [mm] | <sub>{{techref_files:image126.gif?181x27}}</sub> for <sub>{{techref_files:image128.gif?37x18}}</sub> to <sub>{{techref_files:image130.gif?200x27}}</sub> [°2θ] |
| Width of the receiving slit in the equatorial plane | SW [mm] | <sub>{{techref_files:image118.gif?41x23}}</sub> = Hat Shape, for <sub>{{techref_files:image120.gif?119x24}}</sub> where <sub>{{techref_files:image132.gif?139x24}}</sub> [°2θ] |
| **Sample** |||
| Linear absorption coefficient | AB [cm<sup>-1</sup>] | <sub>{{techref_files:image134.gif?159x23}}</sub> for <sub>{{techref_files:image136.gif?45x18}}</sub> and <sub>{{techref_files:image138.gif?181x24}}</sub> [°2θ] |
  
  
===== 7.2        Microstructure convolutions =====
  
The Double-Voigt approach (e.g. Balzar, 1999) is supported for modeling microstructure effects. Crystallite size and strain comprise Lorentzian and Gaussian component convolutions varying in 2θ as a function of 1/cos(θ) and tan(θ) respectively.
  
==== 7.2.1              Preliminary equations ====
  
The following preliminary equations are based on the unit area Gaussian, G<sub>UA</sub>(x), and Lorentzian, L<sub>UA</sub>(x), and pseudo-Voigt PV<sub>UA</sub>(x) functions as given in Table 6‑2.

IB(s) = IB(2θ) cos(θ) / λ
  
==== 7.2.2              Crystallite size and strain ====
  
**__Crystallite Size__**

fwhm(s) of Lorentzian = (180/π) / CS_L

β<sub>GS</sub>(s) = β(s) of Gaussian = (180/π) / (CS_G g<sub>1</sub>)

β<sub>CS</sub>(s) = β(s) of Lorentzian = (180/π) / (CS_L l<sub>1</sub>)
4 ε<sub>0</sub> Tan(θ) = FWHM of the Voigt from Strain_L and Strain_G

                  = Voigt_FWHM(Strain_L, Strain_G) Tan(θ)

or,

ε<sub>0</sub> = Voigt_FWHM(Strain_L, Strain_G) / 4

The macro e0_from_Strain calculates ε<sub>0</sub> using the equation function Voigt_FWHM_GL.
  
===== 7.3        Calculation of structure factors =====
  
The structure factor F for a particular reflection (h k l) is the complex quantity:

The summation Σ<sub>s</sub> is over the sites of the unit cell and the summation Σ<sub>a</sub> is over the atoms residing on site s. O<sub>a</sub> and f<sub>o,a</sub> correspond to the site occupancy and the atomic scattering factor for atom 'a' respectively. f<sub>a</sub>' and f<sub>a</sub>" are the anomalous dispersion coefficients for atom 'a'. A<sub>S</sub> and B<sub>S</sub> correspond to the cosine and sine summations for site 's', or:
  
| A<sub>S</sub> = Σ<sub>e</sub> T<sub>s,e</sub> cos(2π **h** **.** **r**<sub>e</sub>),   B<sub>S</sub> = Σ<sub>e</sub> T<sub>s,e</sub> sin(2π **h** **. r**<sub>e</sub>) | (7‑2) |

where T<sub>s,e</sub> is the temperature factor and the summation Σ<sub>e</sub> is over the equivalent positions of site 's' as dictated by the space group. Defining:

| f<sub>o,s</sub> = Σ<sub>a</sub> f<sub>o,a</sub> O<sub>a</sub>,   f<sub>s</sub><sup>'</sup> = Σ<sub>a</sub> f<sub>a</sub><sup>'</sup> O<sub>a</sub>,   f<sub>s</sub><sup>"</sup> = Σ<sub>a</sub> f<sub>a</sub><sup>"</sup> O<sub>a</sub> | (7‑3) |

and separating the real and imaginary components gives:

| F = Σ<sub>s</sub> (A<sub>s</sub> + i B<sub>s</sub>) (f<sub>o,s</sub> + f<sub>s</sub><sup>'</sup> + i f<sub>s</sub><sup>"</sup>) \\ F = Σ<sub>s</sub> (A<sub>s</sub> (f<sub>o,s</sub> + f<sub>s</sub><sup>'</sup>) - B<sub>s</sub> f<sub>s</sub><sup>"</sup>) + i Σ<sub>s</sub> (A<sub>s</sub> f<sub>s</sub><sup>"</sup> + B<sub>s</sub> (f<sub>o,s</sub> + f<sub>s</sub><sup>'</sup>)) \\ or,  F = A + i B | (7‑4) |
  
The observed intensity is proportional to the complex conjugate of the structure factor, or,

| F<sup>2</sup> = A<sub>01</sub><sup>2</sup> + B<sub>01</sub><sup>2</sup> + A<sub>11</sub><sup>2</sup> + B<sub>11</sub><sup>2</sup> + 2 B<sub>01</sub> A<sub>11</sub> - 2 A<sub>01</sub> B<sub>11</sub> | (7‑5b) |
| where A<sub>01</sub> = Σ<sub>s</sub> A<sub>s</sub> (f<sub>o,s</sub> + f<sub>s</sub><sup>'</sup>),   A<sub>11</sub> = Σ<sub>s</sub> A<sub>s</sub> f<sub>s</sub><sup>"</sup>, \\ B<sub>01</sub> = Σ<sub>s</sub> B<sub>s</sub> (f<sub>o,s</sub> + f<sub>s</sub><sup>'</sup>),   B<sub>11</sub> = Σ<sub>s</sub> B<sub>s</sub> f<sub>s</sub><sup>"</sup>, \\ and A = A<sub>01</sub> - B<sub>11</sub>,   B = B<sub>01</sub> + A<sub>11</sub> |   |

Atomic scattering factors used, f<sub>o,a</sub>, are by default those from

[[http://www.ccp14.ac.uk/ccp/web-mirrors/neutrons/n-scatter/n-lengths/LIST~1.HTM|http://www.ccp14.ac.uk/ccp/web-mirrors/neutrons/n-scatter/n-lengths/LIST~1.HTM]]
  
==== 7.3.1              Friedel pairs ====
  
For centrosymmetric structures the intensities for a Friedel reflection pair are equivalent, or, F<sup>2</sup>(h k l) = F<sup>2</sup>(-h-k-l). This holds true regardless of the presence of anomalous scattering and regardless of the atomic species present in the unit cell. This equivalence in F<sup>2</sup> is due to the fact that B<sub>01</sub> = B<sub>11</sub> = 0 and thus:

| F = A<sub>01</sub> + i A<sub>11</sub>   and   F<sup>2</sup> = A<sub>01</sub><sup>2</sup> + A<sub>11</sub><sup>2</sup> | (7‑6) |

For non-centrosymmetric structures and for the case of no anomalous scattering, or for the case where the unit cell comprises a single atomic species, then F<sup>2</sup>(h k l) = F<sup>2</sup>(-h-k-l). Or, for a single atomic species we have:

| B<sub>01</sub> A<sub>11</sub> = (f<sub>0</sub> + f<sup>'</sup>) (Σ<sub>S</sub> B<sub>S</sub>) f<sup>"</sup> (Σ<sub>S</sub> A<sub>S</sub>),   A<sub>01</sub> B<sub>11</sub> = (f<sub>0</sub> + f<sup>'</sup>) (Σ<sub>S</sub> A<sub>S</sub>) f<sup>"</sup> (Σ<sub>S</sub> B<sub>S</sub>), \\ or, B<sub>01</sub> A<sub>11</sub> = A<sub>01</sub> B<sub>11</sub> | (7‑7) |

and thus from cancellation in Eq. (7‑5b) we get

| F<sup>2</sup> = A<sub>01</sub><sup>2</sup> + B<sub>01</sub><sup>2</sup> + A<sub>11</sub><sup>2</sup> + B<sub>11</sub><sup>2</sup> | (7‑8) |

For non-centrosymmetric structures and for the case of anomalous scattering and for a structure comprising more than one atomic species then F<sup>2</sup>(**h**) ≠ F<sup>2</sup>(-**h**).
  
==== 7.3.2              Powder data ====
  
Friedel pairs are merged for powder diffraction data, meaning that the multiplicities as determined by the hkl generator include the reflections (h k l) and (-h -k -l); this merging of Friedel pairs improves computational efficiency. Eq. (7‑5b) gives the correct intensity for unmerged Friedel pairs and thus it cannot be used for merged Friedel pairs. Using the fact that:

| A<sub>01</sub>(**h**) = A<sub>01</sub>(-**h**),   A<sub>11</sub>(**h**) = A<sub>11</sub>(-**h**), \\ B<sub>01</sub>(**h**) = B<sub>01</sub>(-**h**),   B<sub>11</sub>(**h**) = B<sub>11</sub>(-**h**) | (7‑9) |

then F<sup>2</sup> from Eq. (7‑5b) in terms of B<sub>01</sub>(**h**) and B<sub>11</sub>(**h**) evaluates to:

| F<sup>2</sup>(**h**) = Q<sub>1</sub> + Q<sub>2</sub>, \\ F<sup>2</sup>(-**h**) = Q<sub>1</sub> - Q<sub>2</sub>, \\ where Q<sub>1</sub> = A<sub>01</sub><sup>2</sup> + B<sub>01</sub><sup>2</sup> + A<sub>11</sub><sup>2</sup> + B<sub>11</sub><sup>2</sup> and Q<sub>2</sub> = 2 (B<sub>01</sub> A<sub>11</sub> - A<sub>01</sub> B<sub>11</sub>) | (7‑10) |

and for merged Friedel pairs we get:

In addition the macros Out_F2_Details and Out_A01_A11_B01_B11 can be used to output F<sup>2</sup> details.
  
==== 7.3.3              Single crystal data ====
  
SHELX HKL4 single crystal data comprise unmerged equivalent reflections and thus Eq. (7‑5b) is used for calculating F<sup>2</sup>. Equivalent reflections are merged by default and can be unmerged using the //dont_merge_equivalent_reflections// keyword. For centrosymmetric structures, merging includes the merging of Friedel pairs and thus Eq. (7‑12) is used for calculating F<sup>2</sup>. For non-centrosymmetric structures, merging excludes the merging of Friedel pairs and thus Eq. (7‑5b) is used for calculating F<sup>2</sup>. The keyword //dont_merge_Friedel_pairs// prevents the merging of Friedel pairs. The //ignore_differences_in_Friedel_pairs// keyword forces the use of Eq. (7‑12) for calculating F<sup>2</sup>. The reserved parameter name Mobs returns the number of observed reflections belonging to a particular family of reflections.

*.SCR data is typically generated from a powder pattern and comprises merged equivalent reflections including merged Friedel pairs. As a consequence Eq. (7‑12) is used for calculating F<sup>2</sup>; any definitions of //dont_merge_equivalent_reflections//, //dont_merge_Friedel_pairs// and //ignore_differences_in_Friedel_pairs// are ignored.
  
==== 7.3.4              The Flack parameter ====
  
For single crystal data and for non-centrosymmetric structures the Flack parameter (Flack, 1983) as implemented scales F<sup>2</sup>(**h**) and F<sup>2</sup>(-**h**) as defined in Eq. (7‑13).

| F<sup>2</sup>(**h**) = Q<sub>1</sub> + (1 - 2 Flack) Q<sub>2</sub>, \\ F<sup>2</sup>(-**h**) = Q<sub>1</sub> - (1 - 2 Flack) Q<sub>2</sub> | (7‑13) |

See the test example YLIDMA.INP.
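
The limiting cases follow directly from Eq. (7‑13), with the usual interpretation that Flack = 1 corresponds to the inverted absolute structure:

   \mathrm{Flack} = 0:\quad F^2(\mathbf{h}) = Q_1 + Q_2, \qquad
   \mathrm{Flack} = 1:\quad F^2(\mathbf{h}) = Q_1 - Q_2, \qquad
   \mathrm{Flack} = 0.5:\quad F^2(\mathbf{h}) = F^2(-\mathbf{h}) = Q_1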
  
==== 7.3.5              Single Crystal Output ====
  
The macro Out_Single_Crystal_Details, see below, outputs details for a single crystal refinement, see test example YLIDMA.INP. Mobs corresponds to the number of observed reflections belonging to a particular family of planes. When Friedel Pairs are not merged then there will be a different Mobs for **h** and -**h**. Phase symmetry is considered in the values for A01, B01, A11 and B11.

macro Out_Single_Crystal_Details(file)
{
  
   phase_out file load out_record out_fmt out_eqn

   {

       "%4.0f" = H;

       "%4.0f" = K;

       "%4.0f" = L;

       "%4.0f" = Mobs;

       "%4.0f" = M;

       " %11.4f" = A01;

       " %11.4f" = A11;

       " %11.4f" = B01;

       " %11.4f" = B11;

       ' I_no_scale_pks

       '   = Get(scale) Mobs (A01-B11)^2 + (B01+A11)^2; when

       '     ignore_differences_in_Friedel_pairs is NOT defined.

       '   = Get(scale) Mobs (A01^2 + B01^2 + A11^2 + B11^2); when

       '     ignore_differences_in_Friedel_pairs IS defined

       ' If there are no scale_pks then:

       '   I_no_scale_pks = I_after_scale_pks = Ycalc

       " %11.4f" = I_no_scale_pks;

       " %11.4f" = I_after_scale_pks;

       " %11.4f" = Ycalc;

       " %11.4f" = Yobs;

       " %11.4f\n" = SigmaYobs;

   }

}
  
===== 7.4        Large refinements with tens of 1000s of parameters =====
  
 Refinements comprising many parameters and data points can be both slow and memory intensive. Computation speed is hindered by the **A** matrix dot products of Eq. (5‑5) and in the case of dense matrices memory usage in forming the full **A** matix can be prohibitive. The following keywords can be used to overcome these problems: Refinements comprising many parameters and data points can be both slow and memory intensive. Computation speed is hindered by the **A** matrix dot products of Eq. (5‑5) and in the case of dense matrices memory usage in forming the full **A** matix can be prohibitive. The following keywords can be used to overcome these problems:
Line 1284: Line 1283:
 A_matrix_elements_tollerance 0.00001 A_matrix_elements_tollerance 0.00001
  
-The //​approximate_A//​ keyword avoids the calculation of the **A** matrix dot products. ​ ​Typically ​more refinement iterations are required for convergence but in most large problems the time to convergence is greatly decreased (see for example AE14-APPROX-A.INP). Furthermore memory usage of the **A** matrix can be limited using //​A_matrix_memory_allowed_in_Mbytes//;​ this produces a sparse matrix, dependening on alloted memory, by removing small A<​sub>​ij</​sub>​ values.+The //​approximate_A//​ keyword avoids the calculation of the **A** matrix dot products. ​ Typically ​more refinement iterations are required for convergence but in most large problems the time to convergence is greatly decreased (see for example AE14-APPROX-A.INP). Furthermore memory usage of the **A** matrix can be limited using //​A_matrix_memory_allowed_in_Mbytes//;​ this produces a sparse matrix, dependening on alloted memory, by removing small A<​sub>​ij</​sub>​ values.
  
 Typically the calculation of the covariance matrix is impractical and hence errors can instead be determined using the bootstrap method. Typically the calculation of the covariance matrix is impractical and hence errors can instead be determined using the bootstrap method.
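The following fragment sketches how these keywords might be combined; the numerical values are illustrative only, and the //bootstrap_errors// line assumes that errors are indeed to be estimated with the bootstrap method:

   approximate_A
   A_matrix_memory_allowed_in_Mbytes 100
   A_matrix_elements_tollerance 0.00001
   bootstrap_errors 100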
  
===== 7.5 Space groups, hkls and symmetry operators =====

[//space_group// $symbol] is used to define the space group, where $symbol can be any space group symbol occurring in the file SGCOM5.CPP (case insensitive); it can also be a space group number. Here are some examples:

space_group I_41/A_M_D
space_group I_41/A_M_D:2  ' defines second setting of I_41/A_M_D
space_group 206
space_group 222:2         ' defines second setting of 222

Symmetry operators are generated by SGCOM6.EXE and placed into a sg\*.sg file with a name similar to the name of the space group. Space group names containing the characters ‘/’ or ‘:’ are placed in files with names similar to the space group but with those characters replaced by ‘o’ and ‘q’ respectively; the reason is that file names containing these characters are not allowed on some operating systems. hkl generation uses information in the *.sg file.
  
===== 7.6 Site identifying strings =====

Keywords such as //operate_on_points// use a site identifying string; this string can contain the wild card character ‘*’ and a negation character ‘!’. The wild card character ‘*’ used in “O*” means that sites with names starting with ‘O’ are considered. In addition to using the wild card character, the site names can be explicitly written within double quotation marks. For example, consider the following segment:

str
   site Pb1 ...
   site S1 ...
   site O1 ...
   site O2 ...
   site O31 ...
   site O32 ...
   site O4 ...

Table 7‑2 shows some //operate_on_points// strings and the corresponding sites identified for this particular example.

**Table 7‑2**  Example //operate_on_points// strings and the corresponding sites identified.

| **operate_on_points $sites:** | **Sites identified** |
| “O* !O1 !O2“ | O31, O32, O4 |
  
  
===== 7.7 Occupancies and symmetry operators =====

Only unique positions are generated from symmetry operators. Fully occupied sites therefore require site occupancy values of 1. A comparison of atomic positions is performed in the generation of the unique positions with a tolerance in fractional coordinates of 10<sup>-15</sup>. It is therefore necessary to enter fractions in the form of equations when entering fractional atomic coordinates that have recurring values such as 0.33333..., 0.666666... etc.; for example, use

//x// = 1/3; //y// = 1/3; //z// = 2/3;

rather than

//x// 0.33333 //y// 0.33333 //z// 0.66666
  
===== 7.8 Pawley and Le Bail extraction =====

For Pawley intensity extraction (see example PAWLEY1.INP) the following input segment can be used:

hkl_Is
   space_group p-1

For Le Bail intensity extraction (see example LEBAIL1.INP) the following input segment can be used:

hkl_Is
   lebail 1
   space_group p-1

hkls are generated if there are no //hkl_m_d_th2// and //I// keywords defined. After refinement, the details for the generated hkls are appended after the //space_group// keyword. For the Pawley method, once the hkl details are generated, parameter equations can be applied to the //I// parameters as usual.
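As a sketch only (the space group and cell values below are illustrative and are not taken from PAWLEY1.INP), such a phase would normally also carry the cell from which the hkls are generated:

   hkl_Is
      space_group P_42/M_N_M
      a 4.59  b 4.59  c 2.96
      al 90   be 90   ga 90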
  
===== 7.9 Anisotropic refinement models =====

Keywords that can be a function of H, K, L and M, as shown in Table 3‑3, allow for the refinement of anisotropic models including preferred orientation and peak broadening. An important consideration when dealing with hkls in equations is whether to work with the hkls themselves or with their multiplicities. The Multiplicities_Sum macro can be used when working with multiplicities, for example:
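A minimal sketch (the parameter name, its value and the hkl condition are illustrative only):

   prm a1 0.01 min 0 max 1
   th2_offset = Multiplicities_Sum(If(Mod(L, 2) == 0, a1 Tan(Th), 0));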
A completely different viewpoint from refining on half widths is to consider the distribution of lattice metric parameters within a sample. Each crystallite is regarded as having its own lattice parameters, with a multi-dimensional distribution throughout the powder sample. This can be achieved by adding the same structure several times to the input file, as sketched below.
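Everything in the following fragment is illustrative only; in practice the copies would typically be constrained to share most of their parameters, with only the quantities describing the distribution (here the //a// lattice parameter) allowed to differ:

   str
      phase_name "fraction_1"
      a lpa1 5.430
   str
      phase_name "fraction_2"
      a lpa2 5.436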
  
==== 7.9.1 Second rank tensors ====

Anisotropic peak broadening using the Cagliotti relation:

{
//prm// u 0    min -1 max 2
//prm// v 0    min -1 max 1
//prm// w 0.01 min -1 max 2
//prm// uc11 uv11  //prm// uc22 uv22  //prm// uc33 uv33
...
u = (
     H^2   A_star A_star uc11 +
...
v = (
     H^2   A_star A_star vc11 +
...

According to Le Bail & Jouanneaux (1997) the following symmetry restrictions have to be considered:
  
| Cubic | 11=22=33, 12=13=23 |
| Hexagonal, Trigonal, Tetragonal | 11=22, 13=23 |
| Orthorhombic, Monoclinic, Triclinic | None |
  
An analogous variation may also be applied to peak shapes, so a maximum of 36 refineable parameters is obtained.

As the Cagliotti relation is a poor performer in describing the half width dependence on 2θ for X-ray data, and as the extremely high parameter number will not allow for stable and reliable refinements, the examples outlined in sections 7.9.2 and 7.9.3 should be preferred as a basis for describing anisotropic peak broadening.
  
==== 7.9.2 Spherical harmonics ====

The //spherical_harmonics_hkl// keyword can be applied both to peak shapes (for anisotropy) and to intensities (for a preferred orientation correction). Preferred orientation can be described using the PO_Spherical_Harmonics(sh, order) macro, where "sh" is the parameter name and "order" the order of the spherical harmonics. The //scale_pks// keyword is used to correct peak intensities:

{
   spherical_harmonics_hkl sh
      sh_order order
      scale_pks = sh;
}

Applied to a peak shape, for example:

str...
   spherical_harmonics_hkl sh
      sh_order 8
   exp_conv_const = (sh-1) Tan(Th);
  
==== 7.9.3 Miscellaneous models using User defined equations ====

Anisotropic Gaussian convolution broadening as a function of L (see example ceo2hkl.inp):

str...
   prm a 0.1 min 0.0001 max 5
   prm b 0.1 min 0.0001 max 5
   gauss_fwhm = If(L==0, a Tan(Th) + .2, b Tan(Th));

Anisotropic peak shifts as a function of L (//th2_offset//):

str...
   prm at 0.07 min 0.0001 max 1
   prm bt 0.07 min 0.0001 max 1
   th2_offset = If(L==0, at Tan(Th), bt Tan(Th));

Description of anisotropic peak broadening using the March (1932) relation and //str_hkl_angle//:

str...
   str_hkl_angle ang1 1 0 0
   prm p1 1    min 0.0001 max 2
   prm p2 0.01 min 0.0001 max 0.1
   lor_fwhm = p2 Tan(Th) Multiplicities_Sum(((p1^2 Cos(ang1)^2 +
              Sin(ang1)^2 / p1)^(-1.5)));
  
===== 7.10 Rigid bodies and bond length restraints =====

Rigid bodies comprise points in space defined using either the //z_matrix// or //point_for_site// keywords or both simultaneously. All or some of these points can then be operated on using the //rotate// and //translate// keywords.

  * Rotating a rigid body or part of a rigid body around a line.

//ua//, //ub//, and //uc// of the //point_for_site// keyword, //ta//, //tb// and //tc// of the //translate// keyword, //qa//, //qb// and //qc// of the //rotate// keyword and the parameters of the //z_matrix// keyword are all refineable parameters. This means that parameter attributes such as //min/max// can be defined.

The following Web addresses further describe the use of Z-matrices:

The directory RIGID contains rigid body examples in *.RGD files. These files can be viewed and modified using the Rigid-Body-Editor of the GUI.
  
==== 7.10.1 Fractional, Cartesian and Z-matrix coordinates ====

The most basic means of setting up a rigid body is by means of fractional or Cartesian coordinates. A Benzene ring without Hydrogens, for example, can be formulated as follows:

[[#k069|rigid]]
   point_for_site C1 ux =  a Sqrt(3) .5; uy =  a .5;
   point_for_site C2 ux =  a Sqrt(3) .5; uy = -a .5;
   point_for_site C3 ux = -a Sqrt(3) .5; uy =  a .5;
   point_for_site C4 ux = -a Sqrt(3) .5; uy = -a .5;
   point_for_site C5 uy =  a;
   point_for_site C6 uy = -a;

   ' rotate all previously defined points:
   Rotate_about_axies(@ 0, @ 0, @ 0)

   ' translate all previously defined points:
   Translate(@ .1, @ .2, @ .3)

The last two statements rotate and translate the rigid body as a whole; their inclusion is implied if absent in the following examples.
Line 1558: Line 1557:
str...
   site X1 ... occ C 0
   site X2 ... occ C 0
   rigid
      load z_matrix {
         X1
         X2   X1  1.0
         C1   X2  1.3   X1  90
         C2   X2  1.3   X1  90  C1  60.0
         C3   X2  1.3   X1  90  C2  60.0
         C4   X2  1.3   X1  90  C3  60.0
         C5   X2  1.3   X1  90  C4  60.0
         C6   X2  1.3   X1  90  C5  60.0
      }

Atoms with occupancies fixed to zero (dummy atoms) do not take part in structure factor calculations. Importantly, however, dummy atoms can take part in penalties. The mixing of //point_for_site// and //z_matrix// keywords is possible as follows:
rigid
   point_for_site X1
   load z_matrix {
      X2   X1  1.0
      C1   X2  1.3   X1  90
      ...
   }

Z-matrix parameters are like any other parameter; they can be equations and parameter attributes can be assigned. For example, the 1.3 bond distance can be refined as follows:
rigid
   point_for_site X1
   load z_matrix {
      X2   X1  1.0
      C1   X2  c1c2 1.3 min 1.2 max 1.4  X1  90
      C2   X2  =c1c2;   X1  90  C1  60.0
      C3   X2  =c1c2;   X1  90  C2  60.0
      C4   X2  =c1c2;   X1  90  C3  60.0
      C5   X2  =c1c2;   X1  90  C4  60.0
      C6   X2  =c1c2;   X1  90  C5  60.0
   }

This ability to constrain Z-matrix parameters through the use of equations allows for great flexibility. Example uses of such equations include writing a particular Z-matrix bond length parameter in terms of other bond length parameters such that the average bond length is maintained, or, where a bond length is expected to change as a function of a site occupancy, formulating an equation relating the bond length to the site occupancy parameter.
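As an illustrative sketch only (the parameter names and the 1.40 Angstrom target are arbitrary and not taken from the distributed examples), two of the ring bond lengths above could be tied so that their average stays fixed:

   prm b1 1.38 min 1.30 max 1.50
   prm b2 = 2.80 - b1;  ' keeps (b1 + b2)/2 = 1.40
   ...
   C2   X2  =b1;   X1  90  C1  60.0
   C3   X2  =b2;   X1  90  C2  60.0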
  
==== 7.10.2 Translating part of a rigid body ====

Once a starting rigid body model is defined, further //translate// and //rotate// statements can be included to represent deviations from the starting model. For example, if the C1 and C2 atoms are expected to shift as a unit by up to 0.1 Å then the following could be used:

rigid
   load z_matrix {
      X1
      X2   X1  1.0
      C1   X2  1.3   X1  90
      C2   X2  1.3   X1  90  C1  60.0
      C3   X2  1.3   X1  90  C2  60.0
      C4   X2  1.3   X1  90  C3  60.0
      C5   X2  1.3   X1  90  C4  60.0
      C6   X2  1.3   X1  90  C5  60.0
   }
**   translate**
**      tx @ 0 min -.1 max .1**
**      ty @ 0 min -.1 max .1**
**      tz @ 0 min -.1 max .1**
**      operate_on_points "C1 C2"**

where the additional statements are outlined in bold. The Cartesian coordinate representation allows an additional means of shifting the C1 and C2 atoms by refining on the //ux//, //uy// and //uz// coordinates directly, or,
rigid
   point_for_site C1 ux =  a Sqrt(3) .5 **+ t1**; uy =  a .5 **+ t2**; **uz = t3;**
   point_for_site C2 ux =  a Sqrt(3) .5 **+ t1**; uy = -a .5 **+ t2**; **uz = t3;**
   point_for_site C3 ux = -a Sqrt(3) .5;          uy =  a .5;
   point_for_site C4 ux = -a Sqrt(3) .5;          uy = -a .5;
   point_for_site C5                              uy =  a;
   point_for_site C6                              uy = -a;
  
==== 7.10.3 Rotating part of a rigid body around a point ====

Many situations require the rotation of part of a rigid body around a point. An octahedron (Fig. 7‑1), for example, typically rotates around the central atom with three degrees of freedom. To implement such a rotation when the central atom is arbitrarily placed requires setting the origin at the central atom before rotation and then resetting the origin after rotation. This is achieved using the Translate_point_amount macro as follows:

point_for_site A0
point_for_site A1 ux =  r;
point_for_site A2 ux = -r;
point_for_site A3 uy =  r;
point_for_site A4 uy = -r;
point_for_site A5 uz =  r;
point_for_site A6 uz = -r;

The //point_for_site// keywords could just as well be //z_matrix// keywords with the appropriate Z-matrix parameters. The first Translate_point_amount statement translates the specified points (A1 to A6) by an amount equivalent to the negative position of A0. This effectively sets the origin for these points to A0. The second Translate_point_amount resets the origin back to A0. If the A0 atom happens to be at Cartesian (0, 0, 0) then there would be no need for the Translate_point_amount statements.

| {{techref_files:image140.gif?277x198}} |     Fig. 7‑1  Model of an ideal octahedron. A0: central atom; A1 to A6: outer atoms. |
  
Further distortions are possible by refining on different bond lengths between the central atom and selected outer atoms. For example, the following macro describes an orthorhombic bipyramid:

{
   point_for_site s0
   point_for_site s1 ux   r1
   point_for_site s2 ux  -r1
   point_for_site s3 uy   r1
   point_for_site s4 uy  -r1
   point_for_site s5 uz   r2
   point_for_site s6 uz  -r2
}

Note the two different lengths r1 and r2; with r1 = r2 this macro would describe a regular octahedron.
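A sketch of how such a macro could be declared and invoked (the macro name, site names and numerical values here are illustrative only):

   macro Orth_Bipyramid(s0, s1, s2, s3, s4, s5, s6, r1, r2)
   {
      ' body as shown above
   }
   ' usage: central atom B0, outer atoms B1 to B6, half-lengths 1.95 and 2.10 Angstroms
   Orth_Bipyramid(B0, B1, B2, B3, B4, B5, B6, 1.95, 2.10)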
  
==== 7.10.4 Rotating part of a rigid body around a line ====

Rigid bodies can be created by using the //rotate// and //translate// keywords instead of explicitly entering fractional or Cartesian coordinates. For example, two connected Benzene rings (a schematic without Hydrogens is shown in Fig. 7‑2) can be formulated as follows:

rigid
   point_for_site C1 ux = r;
   load point_for_site ux rotate qz operate_on_points {
      C2 =r; 60  1 C2
      C3 =r; 120 1 C3
      C4 =r; 180 1 C4
      C5 =r; 240 1 C5
      C6 =r; 300 1 C6
   }
   point_for_site C7 ux = r;
   load point_for_site ux rotate qz operate_on_points {
      C8  =r; 60  1 C8
      C9  =r; 120 1 C9
      C10 =r; 300 1 C10
   }
   translate tx = 1.5 r; ty = r Sin(60 Deg);
      operate_on_points "C7 C8 C9 C10"

The points of the second ring can be rotated around the line connecting C1 to C2 with the following:
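A sketch of such a statement, assuming the Rotate_about_points macro takes the rotation parameter, the two points defining the line, and the sites to be rotated (the argument order shown is an assumption):

   Rotate_about_points(@ 0, C1, C2, "C7 C8 C9 C10")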
Similar Rotate_about_points statements for each atom would allow for distortions of the Benzene rings without changing bond distances.

| {{techref_files:image142.gif?240x150}} |   Fig. 7‑2. Model of two connected Benzene rings. |
  
==== 7.10.5 Benefits of using Z-matrix together with rotate and translate ====

Cyclopentadienyl (C<sub>5</sub>H<sub>5</sub>)<sup>-</sup> is a well defined molecular fragment which shows slight deviation from a perfect five-fold ring (Fig. 7‑3). The rigid body definition using //point_for_site// keywords is as follows:

load point_for_site ux { H1 =r2; H2 =r2; H3 =r2; H4 =r2; H5 =r2; }
load rotate qz operate_on_points {  72 1 C2  144 1 C3
                                   216 1 C4  288 1 C5 }
load rotate qz operate_on_points {  72 1 H2  144 1 H3
                                   216 1 H4  288 1 H5 }

and using a typical Z-matrix representation:

X1
X2   X1 1
C1   X2 1.19   X1  90
C2   X2 1.19   X1  90   C1  72
C3   X2 1.19   X1  90   C2  72
C4   X2 1.19   X1  90   C3  72
C5   X2 1.19   X1  90   C4  72
X3   C1 1      X2  90   X1   0
H1   C1 1.05   X3  90   X2 180
H2   C2 1.05   C1 126   X2 180
H3   C3 1.05   C2 126   X2 180
H4   C4 1.05   C3 126   X2 180
H5   C5 1.05   C4 126   X2 180
}

Thus the ability to include //rotate// and //translate// statements together with the //z_matrix// keyword gives greater flexibility in defining rigid bodies.

| {{techref_files:image144.gif?239x233}} |       Fig. 7‑3. Model of the idealized cyclopentadienyl anion (C<sub>5</sub>H<sub>5</sub>). |
  
==== 7.10.6 The simplest of rigid bodies ====

The simplest rigid body comprises an atom constrained to move within a sphere; for a radius of 1 this can be achieved as follows:

rigid
z_matrix A              ' line 1
z_matrix B A 2          ' line 2
rotate @ 20 qa 1        ' line 3
rotate @ 20 qb 1        ' line 4
translate ta @ .1 tb @ .2 tc @ .3  ' line 5

In Cartesian form:

rigid
point_for_site A           ' line 1
point_for_site B uz 2      ' line 2
rotate @ 20 qa 1           ' line 3
rotate @ 20 qb 1           ' line 4
translate ta @ .1 tb @ .2 tc @ .3   ' line 5

Lines 1 and 2 define the two points (note that //ux//, //uy// and //uz// default to 0), lines 3 and 4 rotate the two points around the **//a//** lattice vector and then the **//b//** lattice vector respectively, and line 5 translates the two points to a position in fractional atomic coordinates of (.1, .2, .3). Lines 3 to 5 contain the five parameters associated with this rigid body.

Note, this macro defines the distance between the two sites as a parameter that can be refined.
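A sketch of what such a macro could look like (the name, argument list and min/max bounds here are assumptions and may differ from the macro used in the distributed examples):

   macro Two_Point_Rigid_Body(s1, s2, dname, dval)
   {
      rigid
         z_matrix s1
         z_matrix s2 s1 dname dval min 1 max 3
         rotate @ 20 qa 1
         rotate @ 20 qb 1
         translate ta @ .1 tb @ .2 tc @ .3
   }
   ' usage: sites A and B with a refineable A-B distance starting at 2 Angstroms
   Two_Point_Rigid_Body(A, B, ab, 2)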
  
==== 7.10.7 Generation of rigid bodies ====

A rigid body is constructed by the sequential processing of //z_matrix//, //point_for_site//, //rotate// and //translate// operations. The body is then converted to fractional atomic coordinates and then symmetry operations of the space group applied.

operate_on_points "Si* O* !O2"
===== 7.11 Simulated annealing and structure determination =====

Defining //[[#k016|continue]][[#k016|_after_convergence]]// and a [[#k086|temperature regime]] is analogous to defining a simulated annealing process. After convergence a new [[#k144|refinement cycle]] is initiated with parameter values changed according to any defined //[[#x000|val_on_continue]]// attributes and //[[#k128|rand_xyz]]// or //[[#k133|randomize_on_errors]]// processes. Thus simulated annealing is not specific to structure solution; see for example ONLYPENA.INP and ROSENBROCK-10.INP.
In regards to structure solution in real space, the need for computation efficiency is critical. In many cases computation speed can be increased by up to a factor of 20 or more with the appropriate choice of keywords. Keywords that facilitate speed are as follows:

[[#k100|yobs_to_xo_posn_yobs]] !E

Another category is one that facilitates structure solution by changing the form of <sub>{{techref_files:image002.gif?20x23}}</sub>:

penalties_weighting_K1 !E

temperature !E...
   move_to_the_next_temperature_regardless_of_the_change_in_rwp
   save_values_as_best_after_randomization
   use_best_values
   do_processes

xdd... or xdd_scr...
   str...
      site ... rand_xyz...
      break_if_been_there
      try_site_patterns...
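A minimal sketch of how some of these keywords might be combined (the temperature values, the site and the randomization amount are illustrative only):

   continue_after_convergence
   temperature 2
   temperature 1
   temperature 1
      move_to_the_next_temperature_regardless_of_the_change_in_rwp
   xdd...
      str...
         site C1 x @ 0.1 y @ 0.2 z @ 0.3 occ C 1 beq 1 rand_xyz 0.5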
  
==== 7.11.1 Penalties used in structure determination ====

Introducing suitable penalty functions can reduce the number of local minima in <sub>{{techref_files:image002.gif?20x23}}</sub> and correspondingly increase the chances of obtaining a global minimum. The structure factor for a reflection with Miller indices 10 0 0 for a two atom triclinic unit cell with fractional atomic coordinates of (0,0,0) and (//x//, 0, 0) is given by 4 cos(πh//x//)<sup>2</sup>; here there are 10 local minima for 0 < //x// < 1. If it were known that the bond length distance is half the distance of the //a// lattice parameter then a suitable penalty function would reduce the number of minima to one. In this trivial example it can be seen that the number of minima increases as the Miller indices increase. For non-trivial structures and for the important d spacing range near inter-atomic distances of 1 to 2 Å the number of local minima is very large. Bragg reflections with large Miller indices that are heavily weighted are expected to contain many false minima; by applying an appropriate weighting scheme to the diffraction data the search for the global minimum can be facilitated. For powder data the default //[[#k092|weighting]]// scheme is:

//weighting// = If(Yobs <= 1, 1, 1 / Yobs);

Two penalty functions that have been shown to facilitate the determination of structures are the anti-bumping (AB) penalty and the potential energy penalty U. The anti-bumping penalty is written as:

| <sub>{{techref_files:image146.gif?300x77}}</sub> | (7‑14) |

where //r//<sub>0</sub> is a bond length distance, //r//<sub>ij</sub> the distance between atoms i and j including symmetry equivalent positions, and the summation is over all atoms of type j. The //ai_anti_bump// and //box_interaction// keywords are used to implement the penalty of Eq. 7‑14 using the AI_Anti_Bump and Anti_Bump macros respectively.

The //grs_interaction// can be used to calculate either the Lennard-Jones or the Born-Mayer potential and is suited to ionic atomic models (see example ALVO4-GRS-AUTO.INP). For a particular site i these comprise a Coulomb term C<sub>i</sub> and a repulsive term R<sub>i</sub>, written as:

| <sub>{{techref_files:image148.gif?86x20}}</sub> where <sub>{{techref_files:image150.gif?127x27}}</sub>, i≠j; <sub>{{techref_files:image152.gif?100x27}}</sub>, for Lennard-Jones and i≠j; <sub>{{techref_files:image154.gif?144x27}}</sub>, for Born-Mayer and i≠j | (7‑15) |

where A = e<sup>2</sup>/(4πε<sub>0</sub>) and ε<sub>0</sub> is the permittivity of free space, Q<sub>i</sub> and Q<sub>j</sub> are the ionic valences of atoms i and j, //r//<sub>ij</sub> is the distance between atoms i and j and the summation is over all atoms to infinity. The repulsive constants B<sub>ij</sub>, //n//, //c//<sub>ij</sub> and //d// are characteristic of the atomic species and their potential surrounds. The equation part of the //grs_interaction// is typically used to describe the repulsive terms.
  
==== 7.11.2 Definition of bond length restraints ====

The following example defines a bond length restraint using the [[#k027|GRS series]] (see example ALVO4-GRS-AUTO.INP) between an Aluminum site and three Oxygen sites. Valence charges have been set to +3 and -2 for Aluminum and Oxygen, respectively. The expected bond length is 2 Angstroms between Oxygen sites and 1.5 Angstroms between Aluminum and Oxygen sites.

site Al  x @ 0.7491  y @ 0.6981  z @ 0.4069  occ Al+3 1  beq 0.25
site O1  x @ 0.6350  y @ 0.4873  z @ 0.2544  occ O-2  1  beq 1
site O2  x @ 0.2574  y @ 0.4325  z @ 0.4313  occ O-2  1  beq 1
site O3  x @ 0.0450  y @ 0.6935  z @ 0.4271  occ O-2  1  beq 1

Grs_Interaction(O*, O*, -2, -2, oo,  2.0, 5)  penalty = oo;
Grs_Interaction(Al, O*,  4, -2, alo, 1.5, 5)  penalty = alo;

The following example defines a bond length restraint using the AI_Anti_Bump macro between a Potassium site and three Carbon sites. The expected bond length is 4 Angstroms between Potassium sites and 1.3 Angstroms between Carbon sites.

site K   x @ 0.14305  y @ 0.21812  z @ 0.12167  occ K 1  beq 1
site C1  x @ 0.19191  y @ 0.40979  z @ 0.34583  occ C 1  beq 1
site C2  x @ 0.31926  y @ 0.35428  z @ 0.32606  occ C 1  beq 1
site C3  x @ 0.10935  y @ 0.30991  z @ 0.39733  occ C 1  beq 1

AI_Anti_Bump(K , K , 4  , 1)
AI_Anti_Bump(C*, C*, 1.3, 1)

Note there is no explicit definition of a penalty function as in the first example; the AI_Anti_Bump macro already includes a predefined penalty function.