PnR ICC
*******************************************
For running Place and Route (PnR) in Synopsys ICC (IC Compiler):
---------------------------------------------------------------------
Synopsys ICC uses a Milkyway (MW) reference library and MW tech/RC model files (TLU+) as the physical data for PnR.
Note: Cadence Encounter uses .lef files for physical data.
For logical data (during synthesis, timing, etc.), Synopsys and Cadence both use .lib or .db timing files.
For running DC in topo mode, we need the MW lib so that DC can estimate cell placement, and we need TLU+ files so that DC can calc wire delays from the tech data instead of from wire load models.
MW ref lib structure: it's a unix dir containing binary files. It has 3 views for any ref lib (e.g. stdcells):
-----
1. CEL view: it's in subdir CEL and contains the actual layout data for the cell. It's not used by the router; it's used for signoff extraction and signoff DRC/LVS checks. We don't really need this view, as extraction files (e.g. *.spef) have only routing extraction info (R, C of nets) and no extraction from the physical lib cells. The timing data in the .lib file for all these std cells is used for delay calc to do timing analysis. It might be useful for DRC/LVS checks, e.g. if the .lef file had some incorrect blkg, etc.
2. FRAM view: the frame view (similar to a lef abstract). It has pin, blkg, via, dimension, symmetry, etc., which is what the PnR tool uses.
3. LM view (optional): the logical model view, with all timing info. These are the same as the .lib/.db files used for timing during synthesis, and are specified using "target_library" and "link_library" just as during synthesis.
:1, :2 etc denote the version number for that particular cell.
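For ex, a ref lib dir looks roughly like this (an illustrative listing; the cell names are just the pml48 cells used later in these notes):
pml48MwRefLibs/CORE/lib, lib_1, ... => binary library control files (similar to the design lib described below)
pml48MwRefLibs/CORE/CEL/ => IV110:1, BU120:1, NA210L:1, ... (one file per cell version)
pml48MwRefLibs/CORE/FRAM/ => IV110:1, BU120:1, NA210L:1, ... (abstract view used by PnR)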
To create a MW ref lib, use the Milkyway tool:
/apps/synopsys/milkyway/2010.03/bin/AMD.64/Milkyway => brings up a GUI
In the gui, go to cell_library->lef_in (appears on the 2nd row). This opens a read_lef box. Specify the "MW lib name" where we want to store all stdcells (i.e. pml48MwRefLibs), the tech lef file (/db/.../*tech_6layer.lef), and the stdcell lef file (/db/.../*core_2pin.lef). Click OK. This converts all cells in the LEF file to their equivalent FRAM view and adds them as subdirs in the MW lib dir specified above.
---
Instead of working from the gui, we can use the cmd line i/f as follows (after opening the mw gui):
;# step 1 create a milkyway library from the tech file
cmCreateLib
setFormField "Create Library" "Library Name" "pml48_ref_libs/CORE" => since the dir to be created by MW is CORE, we need to have dir pml48_ref_libs already existing, or else mw will fail.
setFormField "Create Library" "Technology File Name" "../gs40.6lm.tf"
setFormField "Create Library" "Set Case Sensitive" "1"
formOK "Create Library"
;# step 2 read the lef into CEL view and model it into FRAM view
read_lef
setFormField "Read LEF" "Library Name" "pml48_ref_libs/CORE"
setFormField "Read LEF" "Cell LEF Files" "/db/pdk/1533e035/rev1/diglib/pml48/r2.4.0/vdio/lef/pml48_1533c035_core_2pin.lef"
setFormField "Read LEF" "Cell Options" "Make New Cell Version"
formOK "Read LEF"
Ex: /db/DAYSTAR/design1p0/HDL/Milkyway => In this dir, we create mw ref lib (both for regular and Chameleon cells). We also put .tf file and mapping file in here to generate tlu+ files. Steps for doing this are shown below in tlu+ section.
--------
create/open design MW lib
--------------
Once we are done creating the MW ref lib, we create the MW design lib using DC. We run DC in topo mode and create our design MW lib. The design lib needs to be created only once; for any subsequent run we just open it.
create_mw_lib -technology <tech_file> -mw_reference_library <ref_lib> my_mw_design_lib => creates design my_mw_design_lib with top level dir my_mw_design_lib, and subdir lib, lib_1, lib_bck within it
open_mw_lib my_mw_design_lib => opens design my_mw_design_lib so that we can run cmds on it.
We can combine create and open MW in one cmd: create_mw_lib ... digtop -open
ex: create_mw_lib -technology /db/DAYSTAR/.../Milkyway/gs40.6lm.tf -mw_reference_library "/db/DAYSTAR/.../Milkyway/pml48MwRefLibs/CORE /db/DAYSTAR/.../Milkyway/pml48ChamMwRefLibs/CORE" -open my_mw_design_lib => done only once, when mw design lib doesn't exist. mw ref lib is the one created above using MilkyWay tool.
open_mw_lib my_mw_design_lib => just open mw lib for any subsequent run, as mw lib already exists.
#synthesize design, or do whatever we want to do on this mw design, then save MW design using this cmd: (save_mw_cel cmd doesn't work here, as it's supported only in ICC). MW db has netlist, synth constraints and optional fp, place, route data (if they exist).
write_milkyway -output digtop => this creates the my_mw_design_lib/CEL dir, which has a digtop:1 file; here :1 is the version number. If we use the write_milkyway cmd more than once, it creates an additional design file and increments the version number. Make sure you open the correct version in Milkyway; by default Milkyway opens the latest version. To avoid creating an additional version, use the -overwrite switch to overwrite the current version of the design file and save disk space.
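ex: write_milkyway -output digtop -overwrite => reuses version digtop:1 instead of creating digtop:2 (same switch used at the end of the topo flow below)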
-----
To load TLU+ file:
---
TLU+ is tech lookup table binary file, which is used by ICC to calc interconnect R,C values based on net geometry.
cmd: set_tlu_plus_files -max_tluplus <max_tluplus_file> -tech2itf_map <mapping_file> => sets pointers to the tlu+ files, assuming they've already been generated. The tech2itf map file is needed to map names from the .tf (technology) file to the .itf (interconnect technology format) file. We need the mapping file because the .tf file used above in the create_mw_lib cmd may have layer names different from the .itf file; since .itf files are used to generate the tlu+ files, names in the .tlup files may differ from the ones in the .tf file, and the mapping file resolves this.
ex: set_tlu_plus_files \
-max_tluplus /db/DAYSTAR/design1p0/HDL/Milkyway/tlu+/gs40.6lm.maxc_maxvia.wb2tcr.metalfill.spb.nlr.tlup \
-min_tluplus /db/DAYSTAR/design1p0/HDL/Milkyway/tlu+/gs40.6lm.minc_minvia.wb2tcr.metalfill.spb.nlr.tlup \
-tech2itf /db/DAYSTAR/design1p0/HDL/Milkyway/mapping.file
To generate TLU+ files:
---
Generally, we have tech file (.tf) which is similar to lef tech file used in vdio. .tf file has all metal/via rules, complex drc rules, all layers, etc and is very elaborate. This is what we had at AMD.
For an ex, look in /db/DAYSTAR/design1p0/HDL/Milkyway/gs40.6lm.tf. It has following in it:
Technology { name="gs40" unitLengthName="micron" ... }
Tile "unit" { width=0.4250 height=3.4000 }
Layer "MET1" { layerNumber=10 minWidth=0.175 minSpacing=0.175 ... } => many more layers as poly, tox, hvt, bondwire, etc.
Layer "VIA2" { layerNumber=13 ... }
FringeCap 17 { number=17 layer1="MET6" layer2="MET1" minFringeCap=0.000010 maxFringeCap=0.000010 } =>b/w any 2 layers
DesignRule { layer1="VIA1" layer2="VIA2" minSpacing=0 }
ContactCode "VIA23" { contactCodeNumber=2 cutLayer="VIA2" lowerLayer="MET2" upperLayer="MET3" ... }
and many more layers ...
Generally vendors provide only .itf (interconnect technology format) files. These contain a desc of the process: thickness and phy attr of conductor and dielectric layers, via layers, etc. They are used to extract RC values for interconnects. The .itf files are used to generate the TLU+ files for ICC with this cmd:
grdgenxo -itf2TLUPlus -i <abc.itf> -o <abc.tluplus> => -itf2TLUPlus option generates tlu+ file instead of nxtgrd file (nxtgrd file are used in star-rcxt tool. this is needed when running ICC in signoff mode)
ex: grdgenxo -itf2TLUPlus -i .../gs40.6lm.maxc_maxvia.wb2tcr.metalfill.spb.nlr.itf.eval -f /testcase/di3/techfiles/sp_di1/sr60/TLUPlus/6lmalcap/itfs/c021.format -o gs40.6lm.maxc_maxvia.wb2tcr.metalfill.spb.nlr.itf.eval.tlup
These tlu+ files have the same layer names as the .itf files. Since these names may not match the names in the .tf file, we use a mapping file that maps .tf layer/via names to .itf layer/via names. It's called .map file, mapping.file, or any other name. For an ex, look in /db/DAYSTAR/design1p0/HDL/Milkyway/mapping.file => it has all "capital letter" layer names mapped to "small letter" layer names. It also drops all layers except active, poly, met and via layers, as the others are not needed.
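As a sketch, the mapping file is basically pairs of "tf_name itf_name" grouped by layer type (the section keywords and lowercase itf names here are assumptions; check the actual mapping.file for the exact format):
conducting_layers
MET1 met1
MET2 met2
...
via_layers
VIA1 via1
VIA2 via2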
------------------------
DC synthesis topo mode: the complete flow for DC synthesis in topo mode looks like this:
----
In .synopsys_dc.setup file in Synthesis dir, set search path to wherever you have .db files.
Then run dc_shell in topo mode: dc_shell-t -2010.03-SP5 -topo -f tcl/top.tcl | tee logs/top.log
#In dc_shell, run initial setup/analyze the normal way
source tcl/setup.tcl
source tcl/analyze.tcl
elaborate $DIG_TOP_LEVEL
current_design $DIG_TOP_LEVEL
link
set_operating_conditions -max W_125_1.35 -library {PML48_W_125_1.35_COREL.db PML48_W_125_1.35_CTSL.db} => points to lib in search path
#set auto_wire_load_selection true => commented as no wlm (as we use tlu+ and net geometry to calc res/cap values)
#set_wire_load_mode enclosed => commented as no wlm
#### start of special cmds for running in topo mode ####
#open/create mw lib
set lib_exist [file exists my_mw_design_lib]
if {$lib_exist != 1} {
create_mw_lib -technology /db/DAYSTAR/design1p0/HDL/Milkyway/gs40.6lm.tf \
-mw_reference_library "/db/DAYSTAR/design1p0/HDL/Milkyway/pml48MwRefLibs/CORE /db/DAYSTAR/design1p0/HDL/Milkyway/pml48ChamMwRefLibs/CORE" -open my_mw_design_lib
}
open_mw_lib my_mw_design_lib
#Enable Cell area and footprint checks (so that area of cell and footprint of cell are consistent) between logical(in link_library) and physical library(in MW db)
set_check_library_options -cell_area -cell_footprint
check_library
#set tlu+ file instead of WLM
set_tlu_plus_files \
-max_tluplus /db/DAYSTAR/design1p0/HDL/Milkyway/tlu+/gs40.6lm.maxc_maxvia.wb2tcr.metalfill.spb.nlr.tlup \
-min_tluplus /db/DAYSTAR/design1p0/HDL/Milkyway/tlu+/gs40.6lm.minc_minvia.wb2tcr.metalfill.spb.nlr.tlup \
-tech2itf /db/DAYSTAR/design1p0/HDL/Milkyway/mapping.file
check_tlu_plus_files => performs sanity checks on TLU+ files to ensure correct tlu+ and map file
#### end of special cmds for running in topo mode ####
#start with normal flow
set_driving_cell -lib_cell IV110 [all_inputs]
set_load 2.5 [all_outputs]
source tcl/dont_use.tcl
source tcl/dont_touch.tcl
...
compile_ultra -scan ...
...
#save final design from mem to MW lib (MW stores physical info of design), and name it as digtop
set mw_design_library my_mw_design_lib => to make sure design lib is set correctly
write_milkyway -output digtop -overwrite => Overwrites existing version of the design under the CEL view.
exit
-----------------------------------
Running ICC:
-----------
Just like in DC, cp the .synopsys_dc.setup file from the synthesis dir to the dir where you are running ICC. It has all the same settings as DC, i.e. it sources other tcl files from the admin area, sets search_path to /db/../synopsys/bin, sets target_library and link_library to PML*_CTS.db, and sets other parameters for snps ICC.
run ICC:
icc_shell -2011.09-SP4 -f tcl/top.tcl | tee logs/my.log => starts up icc
icc_shell> start_gui => to start the gui from icc_shell. 2 guis may open: one is the ICC main window, where we can enter cmds on the icc_shell built into that window; the other is the ICC layout window, which opens whenever we open/import a design. From this window, we control and view PnR.
We can run ICC in 2 modes. Choose from File->Task in ICC layout window.
1. Design planning: full chip planning/feasibility/partitioning is done. Visibility is turned OFF for cells and cell contents. The top panel shows fp, preroute, place, partition, clk, route, pin assgn, timing, etc. Once we are satisfied, we partition the top level design into blocks and do block level impl as shown next.
2. Block implementation: actual impl at block level is done. Visibility is turned ON for cells and cell contents. Top panel shows fp, preroute, place, clk, route, signoff, finish, eco, verification, power, rail, timing, etc.
#reset_design => removes all attr and constraints (dont_touch, size_only, ...)
top.tcl:
-------
#source some other files (same as in DC) => In this file set some variables, i.e "set RTL_DIR /db/dir" "set DIG_TOP_LEVEL digtop" or any other settings
#create is needed only for the first time design is created in ICC. From next time, we just need to open the design.
create_mw_lib -technology /db/DAYSTAR/design1p0/HDL/Milkyway/gs40.6lm.tf \
-mw_reference_library "/db/DAYSTAR/design1p0/HDL/Milkyway/pml48MwRefLibs/CORE /db/DAYSTAR/design1p0/HDL/Milkyway/pml48ChamMwRefLibs/CORE" -open my_mw_design_lib
open_mw_lib my_mw_design_lib => to open mw lib
#ICC can also directly open a mw db written by DC (as in DC topo), so no need to create/open new mw or import any netlist.
#open_mw_lib ../../Synthesis/digtop/my_mw_design_lib
set_tlu_plus_files \
-max_tluplus /db/DAYSTAR/design1p0/HDL/Milkyway/tlu+/gs40.6lm.maxc_maxvia.wb2tcr.metalfill.spb.nlr.tlup \
-min_tluplus /db/DAYSTAR/design1p0/HDL/Milkyway/tlu+/gs40.6lm.minc_minvia.wb2tcr.metalfill.spb.nlr.tlup \
-tech2itf /db/DAYSTAR/design1p0/HDL/Milkyway/mapping.file
check_tlu_plus_files
set mw_logic0_net "VSS"
set mw_logic1_net "VDD"
#read in verilog, vhdl or ddc format
#read_verilog -netlist ../Synthesis/netlist/digtop.v
#current_design $DIG_TOP_LEVEL
#uniquify
#link
#save_mw_cel -as $DIG_TOP_LEVEL
#all of the above can be replaced by this one liner
import_designs ../Synthesis/digtop/netlist/digtop.v -format verilog -top $DIG_TOP_LEVEL
#If we imported mw db from DC, then instead of importing netlist, we can open mw cel directly
#open_mw_cel $DIG_TOP_LEVEL => opens mw cel digtop written by DC. No need to specify path of mw lib, as path is set, whenever we open mw lib.
IO pad/pin placement:
--------------------
set_pad_physical_constraints => Before creating fp, we should create placement and spacing settings for I/O pads. These IO pads refer to analog buf cells that connect I/O pins to internal logic.
set_pin_physical_constraints => To constrain Pins. ICC checks to make sure constraints for both pads and pins are consistent.
set_fp_pin_constraints => sets global constraints for a block. If a conflict arises between the individual pin constraints and the global pin constraints, the individual pin constraints have higher priority.
To save pin/pad constraints:
write_pin_pad_physical_constraints <const_file> => saves all const applied using pad and pin cont cmd above
To read pin/pad constraints:
read_pin_pad_physical_constraints <const_file> => to read all pin/pad const
In our case, these pads are at top level, so we don't need pad const in digtop. We need pin const only when the actual pin io file is not available. Once we get the real io file with pin placement, we don't need to run this section; we just read in the pin def file (see the ex below).
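ex (hypothetical file name): read_def digtop_io_pins.def => once the real io/pin placement def is available, just read it in (read_def adds to existing physical data by default, as noted in the fp section below)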
create_floorplan: create a floorplan similar to what we do in VDI (for older ICC versions, use initialize_floorplan as create_floorplan is only supported from 2011 onwards)
------------------
create_floorplan => creates the block shape, size, and std cell placement rows (based on target util, aspect ratio, core size, bdry-to-core spacing, etc.); placement rows are visible on zooming. It places constrained pads first. Any unconstrained pads are placed next, using any available pad location. Then pins are placed. If pin locations are not specified, pins are placed randomly and evenly distributed along the 4 sides of the block.
#-control_type aspect_ratio | width_and_height | row_number | boundary> => The default control type is the aspect_ratio which indicates that the core area of the floorplan in the current Milkyway CEL is determined by the ratio of the height divided by the width. The width_and_height control type indicates that the core area is determined by the exact width and height.
#-core_width <width> -core_height <height>=> Specifies the width and height of the core area in user units. This option is valid only if you specify the -control_type width_and_height option.
#-left_io2core <x1> -right_io2core <x2> -bottom_io2core <y1> -top_io2core <y2> => Specifies the distance between the left/right/bot/top side of the core area and the right/left/top/bot side of the closest terminal or pad.
create_floorplan => creates fp with default options, to fit in all cells.
create_floorplan -control_type width_and_height -core_width 180 -core_height 200 -left_io2core 8.5 -right_io2core 8.5 -bottom_io2core 8.5 -top_io2core 8.5 => specify fp size and spacing
#initialize_rectilinear_block => only for rectilinear blocks (L,T,U or cross-shaped). In this, pins are not touched at all.
##defining routing tracks. create_track to create tracks. report_track shows all tracks (usr or def). Generally, we'll see all metal layers in both X and Y dirn.
#write_def or write_floorplan to save fp into def or mw.
write_def -output fp_for_DC_topo.def => writes the fp def, so that we can use this fp info in DC topo to get a better synthesized netlist.
#read_def or read_floorplan to import in a fp def file, which has some/all pwr routes, i/o pins and chip dimensions.
read_def chip.def => it adds the physical data in the DEF file to the existing physical data in the design. To replace rather than add to existing data, use the -no_incremental option
#pg connections
derive_pg_connection -power_net VDD -ground_net VSS => creates logical connection b/w pg nets in design to pg pins on stdcells
check_physical_constraints => check that logical lib (.db) and physical lib (mw) match. we see warnings about missing pg nets in fp
report_cell_physical -connection => reports all pin connections for all stdcells
Virtual flat placement: This is for design planning/feasibility purpose only.
----------------------
Virtual flat placement helps you decide on the locations, sizes, and shapes of the top-level physical blocks. This placement is “virtual” because it temporarily considers the design to be entirely flat, without hierarchy. After you decide on the shapes and locations of the physical blocks, you restore the design hierarchy and proceed with the block-by-block physical design flow.
set_fp_placement_strategy => sets parameters that control the create_fp_placement and legalize_fp_placement commands. These settings are not applicable to other placement commands or other parts of the flow.
create_fp_placement => performs a virtual flat placement of standard cells and hard macros. It provides you with an initial placement for creating a floorplan to determine the relative locations and shapes of the toplevel physical blocks
power planning: optional, only needed if we need to create straps/rings.
-------------
#set_fp_rail_constraints => defines PNS (Power network synthesis) constraints
set_fp_rail_constraints -add_layer -layer MET2 -direction vertical -max_strap 20 -min_strap 10 -min_width 0.4 -spacing minimum => -add_layer says to add 10-20 power straps on MET2 in vert dirn, with min_width of 0.4 units. -spacing says that spacing b/w pwr and gnd nets can be min spacing. Sometimes we want to route signals in b/w these pwr and gnd nets, so we may choose "-spacing distance" to specifically specify the distance.
set_fp_rail_constraints -add_layer -layer MET3 -direction horizontal -max_strap 20 -min_strap 10 -min_width 0.4 -spacing minimum => this adds horz straps in MET3
#set_fp_block_ring_constraints => defines the constraints for the power and ground rings that are created around plan groups and macros, when pg n/w is synthesized. This may not be needed for our purpose, since we don't have macros, around which we want to create rings
set_fp_block_ring_constraints -add -horizontal_layer METAL5 -vertical_layer METAL6 -horizontal_width 3 \
-vertical_width 3 -horizontal_offset 0.600 -vertical_offset 0.600 -block_type master -nets {VDD VSS} -block { RAM210 }
#synthesize_fp_rail command => synthesizes the power network based on the set_fp_rail_constraints cmd.
synthesize_fp_rail -power_budget 800 -voltage_supply 1.32 -output_directory powerplan.dir -nets {VDD VSS} -synthesize_power_plan => synthesizes fp rail
commit_fp_rail => commit the power plan to convert the virtual power straps and rings to actual power wires, ground wires, and vias.
create views:
-----------
#specifying min/max timing lib => "link_library" or "target_library" in .synopsys_dc.setup has the max lib only. We are not allowed to specify a min lib there. If more than 1 .db file is specified in the link/target library, the tool just looks through these .db files and stops the first time it finds the required cell. That's why we specify just the max lib files for both CORE and CTS cells.
#So, to specify min lib for min delay analysis, we need to use the "set_min_library" cmd => it associates a min lib with max lib, i.e to compute min dly, tool first consults the library cell from the max library. If a library cell exists with the same name, the same pins, and the same timing arcs in the min library, the timing information from the min library is used. If the tool cannot find a matching cell in the min library, the max library cell is used.
set_min_library PML48_W_125_1.35_COREL.db -min_version PML48_S_-40_1.65_COREL.db => for core cells
set_min_library PML48_W_125_1.35_CTSL.db -min_version PML48_S_-40_1.65_CTSL.db => for cts cells
list_libs => shows all min/max lib. m=min, M=max. Make sure all paths, etc are correctly reported.
###setting mmmc flow: ICC uses multi scenario method to analyze and optimize these designs across all design corners and modes of operation.
A scenario is a combination of modal constraints (e.g. test mode or standby mode) and corner specifications (operating conditions for various PVT). create_scenario defines one such mode/corner. In multicorner-multimode designs, DC/ICC uses a scenario or a set of scenarios as the unit for analysis and optimization. The current scenario is the focus scenario; when you set modal constraints or corner specifications, these typically apply to the current scenario. The active scenarios are the set of scenarios used for timing analysis and optimization.
Specify the TLUPlus libraries, operating conditions, and constraints that apply to the scenario. In general, when you specify these items, they apply to the current scenario.
###create scenario func_max, with max dly lib, and max rc tlu+.
create_scenario func_max => creates scenario, makes that scenario current and active
current_scenario => display the current scenario
current_scenario func_max => current scenario is set to func_max
#set_operating_conditions => defines op cond under which to time or optimize the design
set_operating_conditions W_125_1.35 -library {PML48_W_125_1.35_COREL.db PML48_W_125_1.35_CTSL.db}
#create_operating_conditions -name typ_lib_set -lib {PML48_N_25_1.5_COREL.db PML48_N_25_1.5_CTSL.db} -proc 0 -temp 25 -volt 1.8 => creates new op cond which may not be present. NOT needed for our purpose
#tlu+ set to max rc for both max/min corner
set_tlu_plus_files \
-max_tluplus /db/DAYSTAR/design1p0/HDL/Milkyway/tlu+/gs40.6lm.maxc_maxvia.wb2tcr.metalfill.spb.nlr.tlup \
-min_tluplus /db/DAYSTAR/design1p0/HDL/Milkyway/tlu+/gs40.6lm.maxc_maxvia.wb2tcr.metalfill.spb.nlr.tlup \
-tech2itf /db/DAYSTAR/design1p0/HDL/Milkyway/mapping.file
check_tlu_plus_files
#read sdc file that has constraints from DC. this replaces all lines in DC starting from "set_op_cond" to dont_use/touch, i/o dly, max_transition, create_clock, false_path/multicycle_path, disable_timing etc.
read_sdc
read_sdc ../../Synthesis/digtop/sdc/constraints.sdc
#check
check_timing => all paths should be constrained. If there are unconstrained paths, these should all be false paths as defined in false path file. run report_timing_requirements cmd to verify that.
###create scenario func_min, with min dly lib, and min rc tlu+.
create_scenario func_min
current_scenario => displays the current scenario
current_scenario func_min => current scenario is set to func_min
set_operating_conditions S_-40_1.65 -library {PML48_S_-40_1.65_COREL.db PML48_S_-40_1.65_CTSL.db}
set_tlu_plus_files \
-max_tluplus /db/DAYSTAR/design1p0/HDL/Milkyway/tlu+/gs40.6lm.minc_minvia.wb2tcr.metalfill.spb.nlr.tlup \
-min_tluplus /db/DAYSTAR/design1p0/HDL/Milkyway/tlu+/gs40.6lm.minc_minvia.wb2tcr.metalfill.spb.nlr.tlup \
-tech2itf /db/DAYSTAR/design1p0/HDL/Milkyway/mapping.file
check_tlu_plus_files
read_sdc ../../Synthesis/digtop/sdc/constraints.sdc
#reporting scenarios
all_scenarios => displays all the defined scenarios
report_scenarios => reports all the defined scenarios
set_active_scenarios {s1 s2} => sets s1,s2 to active. -all makes all scenarios active
all_active_scenarios => display the currently active scenarios
remove_scenario => remove the specified scenarios from memory (-all removes all scenarios)
check_scenarios => check all scenarios for any issues
place:
-----
#insert_port_protection_diodes => add diodes to the specified ports to your netlist to prevent antenna violations. should be done after fp and before place. report_port_protection_diodes reports the port protection diodes that are inserted in your design.
#pg connections
preroute_standard_cells -fill_empty_rows => Generates physical PG rails for standard logic cells. It connects all pwr/gnd rails in stdcells together, and then connects them to straps and pwr rings. "-fill_empty_rows" switch fills the CORE area or specified area with empty PG rails where cells can be subsequently placed, so that the entire region has PG rails.
#set active scenario to run setup opt for func_max and hold opt for func_min
set_scenario_options -setup true -hold false -scenarios func_max => Sets the scenario options for func_max to do opt on setup but not on hold (by default, it does opt on both setup and hold timing)
set_scenario_options -setup false -hold true -scenarios func_min => Sets the scenario options for func_min to do opt on hold but not on setup
set_active_scenarios {func_max func_min} => sets both these scenarios active. NOTE: the .lib doesn't have process set to 1 for the min lib (process=-3), so check_scenarios will warn, and place, route, etc. won't run. So, set the active scenario to func_max only => set_active_scenarios {func_max}
# Add set_propagated_clock
set_propagated_clock [all_clocks]
###checks to be done prior to running place, so that any issues can be identified
check_design => check_design -summary cmd automatically runs on every design that is compiled. However, you can use the check_design cmd explicitly to see warning messages. Potential problems detected by this cmd include unloaded input ports or undriven output ports, nets without loads or drivers or with multiple drivers, cells or designs without inputs or outputs, mismatched pin counts between an instance and its ref, tristate buses with non-tristate drivers, and so forth.
check_timing => checks timing and issues warnings. This cmd without any options performs the checks defined by the timing_check_defaults variable. Redefine this variable to change the value.
check_physical_design -stage pre_place_opt => does phy design checks on design data for place. use "-stage pre_clock_opt" for pre cts, and "-stage pre_route_opt" for pre route.
# Perform timing analysis before placement (only run setup). When we do report_timing for setup, it reports setup for the active scenarios. If one of those active scenarios doesn't have "setup=true", then nothing is reported for it. So, we provide the scenario name func_max during report_timing (func_min is only valid for hold).
set rptfilename [format "%s/%s" timingReports ${DIG_TOP_LEVEL}_pre_place.rpt]
redirect $rptfilename {echo "digtop pre place setup run : [date]"}
redirect -append $rptfilename {report_timing -delay_type max -path full_clock_expanded -max_paths 100 -scenarios {func_max}}
#### place
# Add I/O Buffers
set_isolate_ports -driver BU120 -force [all_inputs] => force BU120 cell on all i/p ports
set_isolate_ports -driver BU120 -force [all_outputs] => force BU120 cell on all o/p ports
report_isolate_ports => reports all i/o ports and their isolation cells.
#place_opt -area_recovery => Performs coarse placement, high-fanout net synthesis, physical opt, and legalization. Doesn't touch the clk n/w.
#-area_recovery => min area target
#-cts => enables quick cts, opt and route within place_opt, when designs are large. Should always run clock_opt eventually.
#-spg => uses Design Compiler's Physical Guide information to guide optimization. We can use either the mw, .ddc or def file from DC, all of which have physical info. However, the guidance feature is only available in DC Graphical, so -spg will work only if the DC mw or ddc has been generated using this feature. Also, the fp def from ICC should be imported into DC, so that DC can better synthesize the netlist based on the fp (a DC-side sketch is shown after the place_opt cmd below). Just using DC topo mode doesn't mean that placement info can be read into ICC.
place_opt -area_recovery => may need to be run multiple times with diff options to fix violations.
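#For reference, the DC side of the -spg flow is roughly as follows (a sketch from memory, not part of this run; verify the cmds against your DC version):
#  in dc_shell -topo: extract_physical_constraints fp_for_DC_topo.def => read the fp def written by ICC (see the fp section above)
#  compile_ultra -scan -spg => synthesize with physical guidance
#  write -format ddc -hierarchy -output netlist/digtop.ddc => this ddc (or the mw db from write_milkyway) carries the guide info that place_opt -spg uses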
#reports/checks
report_constraint
report_design
report_placement_utilization
create_qor_snapshot -name post_place => stores design qor in set of report files in dir "snapshot"
report_qor_snapshot => used to retrieve the qor rpt.
# Perform timing analysis after placement
set rptfilename [format "%s/%s" timingReports ${DIG_TOP_LEVEL}_post_place.rpt]
redirect $rptfilename {echo "digtop post place setup run : [date]"}
redirect -append $rptfilename {report_timing -delay max -path full_clock -max_paths 100}
# Add Spares. for scan, add flops with scan, otherwise non-scan flops.
insert_spare_cells -lib_cell {IV120L NA210L} -cell_name spares -num_instances 10 -tie => inserts spare cells group specified (IV120L,NA210L) 10 times spread uniformly across design with input pins tied to 0.
all_spare_cells => list all spare cells in design
#check and save
check_design
save_mw_cel -as post_place => we'll see a post_place:1 file in my_mw_design_lib/CEL dir.
write_def -output digtop_post_place.def
write_verilog ./netlist/digtop_post_place.v
#Post-placement optimization
psynopt => performs timing optimization and design rule fixing, based on the max cap and max transition settings while keeping the clock networks untouched. It can also perform power optimizations. It can remove dangling cells (to prevent that, use "set_dont_touch" cmd to apply dont_touch attr on required cells)
CTS
----
Prereq for CTS are:
1. check_legality -verbose => to verify that the placement is legal
2. pwr/gnd nets should be prerouted
3. High-fanout nets, such as scan enables, should already be synthesized with buffers.
4. By default, CTS cannot use buf/inv that have the dont_use attribute to build the clock tree. To use these cells during CTS, either remove the dont_use attribute with the remove_attribute cmd, or override the dont_use attribute by specifying the cell as a clock tree reference with the set_clock_tree_references cmd (see the sketch below).
CTS traces thru all comb cells (incl clk gating cells). However, it doesn't trace thru seq arcs or 3 state enable arcs
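ex (a minimal sketch; the CTB* pattern is just illustrative for the clk buf cells used in this design):
remove_attribute [get_lib_cells */CTB*] dont_use => lets all cmds use these cells, not just CTS
#or, to override dont_use for CTS only:
set_clock_tree_references -references "CTB20B CTB40B" => the full list is set with set_clock_tree_references in the CTS options below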
check_physical_design -for_cts => checks if design is placed, clk defined and clk root are not hier pins.
check_clock_tree => checks and warns if clk src pin is hier, incorrect gen clk, clk tree has no sync pins, and if there are multiple clks per reg.
#set_clock_tree_options => sets clk tree options
#-clock_trees clock_source
#-target_early_delay insertion_delay => by default, min insertion delay is set to 0.
#-target_skew skew
#-max_capacitance capacitance => by default, max cap is set to 0.6pf.(if not specified for design or not specified using switch here)
#-max_transition transition_time => By default, the max transition time is 0.5 ns
set_clock_tree_options -clock_trees sclk_in -target_early_delay 0 -target_skew 0.5 -max_transition 0.6 => set skew and tran
4 kinds of pins that are used in CTS. A pin may belong to more than 1 of these:
1. STOP pins: pins that are endpoints of clk tree. eg. clk pins of cells, clk pins of IP.
2. NONSTOP pins: pins that would normally be stop pins, but are not. The clock pins of sequential cells driving generated clocks are implicit NONSTOP (not STOP) pins, as clk tree balancing needs to be done thru these pins. NOTE: this default behaviour is different from EDI, where ThroughPin has to be used in the .ctstch file to force CTS thru the generated clks.
3. FLOAT pins: similar to STOP pins, but have special insertion delay requirements (have extra delay on clk pins). ICC adds the float pin delay (positive or negative) to the calculated insertion delay up to this pin. Usually, IP/Macro pins are defined as FLOAT pins so that we can add appr delay to the pin, equal to dly in the clk tree inside the IP/Macro.
4. EXCLUDE pins: clock tree endpoints that are excluded from CTS. Implicit exclude pins are clk pins going to o/p ports, pins on IP/macro that are not defined as clk pins (i.e. they are treated as data pins; we have to explicitly set these pins to stop_pins), or data pins of seq cells. During CTS, ICC isolates exclude pins (both implicit and explicit) from the clock tree by inserting a guide buffer before the pin. Beyond the exclude pin, ICC never performs skew or insertion delay optimization, but it does perform design rule fixing. NOTE: In EDI, we use ExcludedPin in the .ctstch file to specify exclude pins.
#set_clock_tree_exceptions => sets clk tree exceptions on the pins above. We don't need this (an illustrative ex is sketched after the option list below).
#-clocks clk_names => clks must be ones defined by "create_clock" and NOT by "create_generated_clock".
#-stop_pins stop_pin_collection
#-non_stop_pins non_stop_pin_collection
#-exclude_pins exclude_pin_collection
#-float_pins float_pin_collection => additional options for max/min_delay_rise/fall should be used.
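#ex (illustrative only, since we don't set exceptions in this flow; the pin names are hypothetical):
#set_clock_tree_exceptions -clocks {sclk_in} -stop_pins [get_pins u_macro/ref_clk] -exclude_pins [get_pins u_dac/spare_clk]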
#set_clock_tree_references => Specifies the buffers, inverters, and clock gates to be used in CTS.
#-clock_trees clock_names => by default, it applies to all clks
#-references ref_cells => Specifies the list of buffers, inverters, and clock gates for CTS.
set_clock_tree_references -references "CTB02B CTB15B CTB201B CTB20B CTB25B CTB30B CTB35B CTB40B CTB45B CTB50B CTB55B CTB60B CTB65B CTB70B" => In EDI, equiv cmd was "Buffer" used in .ctstch file
clock_opt => Performs clock tree synthesis, routing of clock nets, extraction, optimization, and hold-time violation fixing. Uses default wires (default routing rules) to route clk trees. We can define non-default routing rules using the "define_routing_rule" cmd, and use these routing rules with the "set_clock_tree_options -routing_rule" cmd. NDR rules define what wires, routing layers, and clk shielding to use. Shielding is done using the "create_zrt_shield" cmd, after doing clock_opt.
Prior to the clock_opt command, use the set_clock_tree_options command to control the compile_clock_tree command. Briefly, it runs the following cmds under the hood:
o Runs the compile_clock_tree cmd => run multiple times using diff options
o Runs the optimize_clock_tree cmd
o Runs the set_propagated_clock command for all clocks from the root pin, but keeps the clock object as ideal
o Performs interclock delay balancing, if enabled (using the set_inter_clock_delay_options command)
o Sets the clock buffers as fixed
o Updates latency on clock objects with their insertion delays obtained after compile_clock_tree, if enabled (using the set_latency_adjustment_options command)
o Runs the "route_group -all_clock_nets" cmd to route clk nets. The "-no_clock_route" switch disables routing of clock nets.
#running clock_opt in these steps is more flexible than clock_opt alone.
#clock_opt -only_cts -no_clock_route => performs CTS with opt only with no routing of nets
#clock_opt -only_psyn -no_clock_route => performs opt only, with no routing of nets. This is used in a user-customized CTS flow where CTS is performed outside of the clock_opt command
#route_group -all_clock_nets
clock_opt
## Post CTS optimization
clock_opt -only_psyn
route:
------
zroute is default router for ICC. Even though it's grid router, it allows nets to go off grid to connect to pins. Prereq for running zroute are: pwr/gnd nets must be routed and CTS should have been run.
We can run prerouter to preroute signal nets, before running zroute. zroute doesn't reroute these nets, but only fixes DRC.
check_routeability => to verify that design is ready for routing
#define routing guides
#create_route_guide -coordinate {0.0 0.0 100.0 100.0} -no_signal_layers {MET3 MET4 MET5 MET6}
#set_route_zrt_common_options -min_layer_mode hard -max_layer_mode hard => min/max layers are set to hard constraints, instead of soft constraints.
set_ignored_layers -min_routing_layer MET1 -max_routing_layer MET3 => max/min routing layers, by default these are hard constraints.
#define_routing_rule => to define nondefault routing rules (width,spacing,etc), both for routing and for shielding. These rules are assigned diff names, and then they are applied either on clk nets using "set_clock_tree_options" during CTS, or on signal nets and clk nets after CTS using "set_net_routing_rule".
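#ex (a minimal sketch; the rule name, layers and values are illustrative, and the set_net_routing_rule flag is from memory):
#define_routing_rule my_clk_ndr -widths {MET2 0.35 MET3 0.35} -spacings {MET2 0.35 MET3 0.35} => 2x width/spacing rule for clk nets
#set_clock_tree_options -routing_rule my_clk_ndr => apply the NDR during CTS
#set_net_routing_rule -rule my_clk_ndr [get_nets clk*] => or apply to specific nets after CTS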
#display current settings for all routing options
set_route_zrt_common_options -verbose_level 1
report_route_zrt_common_options
#3 methods to route signal nets:
1. route_zrt_global => performs global routing. route_zrt_track => performs track assignment. route_zrt_detail => performs detail routing. Useful in cases where we want to customize the routing flow (see the sketch after this list).
2. route_zrt_auto => performs all tasks in method 1 above. Runs fast so useful for analyzing routing congestion, etc.
3. route_opt => performs everything in method 2 above + postroute opt. To skip opt, add "-initial_route_only". Used for final routing.
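#ex of method 1 (a sketch with default options, using only the cmds named above):
#route_zrt_global => global routing
#route_zrt_track => track assignment
#route_zrt_detail => detail routing; fixes the DRC violations left where routes connect to pins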
The 3 substeps of routing are as follows:
-----------------
1. global routing:
----------
The global router divides design into global routing cells (GRC). By default, the width of a GRC is the same as the height of a standard cell and is aligned with the standard cell rows.
For each global routing cell, the routing capacity is calculated according to the blockages,
pins, and routing tracks inside the cell. Although the nets are not assigned to the actual wire
tracks during global routing, the number of nets assigned to each global routing cell is noted.
The tool calculates the demand for wire tracks in each global routing cell and reports the
overflows, which are the number of wire tracks that are still needed after the tool assigns
nets to the available wire tracks in a global routing cell.
Global routing is done in two phases:
phase 0 = initial routing phase, in which the tool routes the unconnected nets and calculates the overflow for each global routing cell
phase 1 = The rerouting phases, in which the tool tries to reduce congestion by ripping up and rerouting nets around global routing cells with overflows. It does it several times (-effort minimum causes this phase to run once while -effort high causes it to run 4 times)
routing report:
phase3. Both Dirs: Overflow = 453 Max = 4 GRCs = 449 (0.02%) => there are 453 wires in design that don't have corresponding track available. The Max value corresponds to the highest number of overutilized wires in a single GRC. The GRCs value is the total number of overcongested global routing cells in the design
2. track assignment:
------
The main task of track assignment is to assign routing tracks for each global route. During track assignment, Zroute performs the following tasks:
• Assigns tracks in horizontal partitions.
• Assigns tracks in vertical partitions.
• Reroutes overlapping wires.
After track assignment finishes, all nets are routed but not very carefully. There are many violations, particularly where the routing connects to pins. Detail routing works to correct those violations.
routing report: reports a summary of the wire length and via count.
3. detail routing:
------
The detail router uses the general pathways suggested by global routing and track assignment to route the nets, and then it divides the design into partitions and looks for DRC violations in each partition. When the detail router finds a violation, it rips up the wire and reroutes it to fix the violation. During detail routing, Zroute concurrently addresses routing design rules and antenna rules and optimizes via count and wire length.
Zroute uses a single uniform partition for the first iteration to generate all DRC violations for the chip at the same time. At the beginning of each subsequent iteration, the router checks the distribution of the DRC violations. If the DRC violations are evenly distributed, the detail router uses a uniform partition. If the DRC violations are located in some local areas, the detail router uses nonuniform partitions. It performs iterations until all of the violations have been fixed, the maximum number of iterations has been reached, or it cannot fix any of the remaining violations.
routing report: reports DRC violations summary at the end of each iteration. a summary of the wire length and via count.
route_opt => does all 3 stages of routing + opt.
report_design_physical -verbose => to view PnR summary rpt.
verify_zrt_route => checks for routing DRC violations, unconnected nets, antenna rule violations, and voltage area violations on all nets in the design, except those marked as user nets or frozen nets.
extract_rc -coupling_cap => explicitly performs postroute RC extraction, with coupling cap. RC estimation is already done, when route_opt or any report_* cmd is run.
#report setup/hold timing, write def/verilog
#post route opt if needed
#route_opt -incremental
#route_opt -skip_initial_route -xtalk_reduction
STA:
----
#set all scenarios active
set_scenario_options -setup true -hold true -scenarios {func_max func_min scan_max scan_min}
set_active_scenarios {func_max func_min scan_max scan_min}
#report timing
#opt if needed
#for fixing DRV
set routeopt_drc_over_timing true
route_opt -effort high -incremental -only_design_rule
#for fixing hold
route_opt -only_hold_time
#for si
set_si_options -delta_delay true -route_xtalk_prevention true -route_xtalk_prevention_threshold 0.35
route_opt -skip_initial_route -xtalk_reduction
#focal_opt
Signoff:
--------
From the routed db, we can do signoff-driven design closure in 2 ways:
1. signoff_opt => auto flow. runs analysis and optimization.
2. run_signoff => manual flow. runs analysis
During analysis in signoff, StarRC is used to perform a complete parasitic extraction and the results are stored as a Synopsys Binary Parasitic Format (SBPF) file or a SPEF file. For timing, PT is run, and the timing info is passed back to ICC. When not in signoff mode, the ICC internal engines are used for both extraction and timing.
set_primetime_options -exec_dir /apps/synopsys/pt/2011.12/amd64/syn/bin
set_starrcxt_options -exec_dir /apps/synopsys/star-rcxt/2011.12/amd64_starrc/bin
report_primetime_options
report_starrcxt_options
#scenarios
set_starrcxt_options -max_nxtgrd_file $max_grd_file -map_file /db/DAYSTAR/design1p0/HDL/Milkyway/mapping.file
#NOTE: still get errors, when running signoff_opt =>
#Information: Use StarRCXT path /apps/synopsys/star-rcxt/2011.12/amd64_starrc/bin. (PSYN-188)
#Error: The star_path option can only be used in conjunction with the star_max_nxtgrd_file option(s). (UIO-18)
#Error: The star_path option can only be used in conjunction with the star_map_file option(s). (UIO-18)
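#presumably (untested) the fix is to pass the nxtgrd and map file in the same call as the exec dir:
#set_starrcxt_options -exec_dir /apps/synopsys/star-rcxt/2011.12/amd64_starrc/bin -max_nxtgrd_file $max_grd_file -map_file /db/DAYSTAR/design1p0/HDL/Milkyway/mapping.file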
signoff_opt => run signoff optimization by ICC, based on results from signoff tool: starRC and PT.
#report_timing
#report_constraint -all_violators
save_mw_cel -as signoff
#if inc opt needed (to fix drv, hold time, si => use additional options with signoff_opt)
signoff_opt -only_psyn
#check_signoff_correlation => check the correlation between ICC and PT, and between ICC and StarRC.
Filler:
-------
# we insert filler cells before running signoff, so as to catch any issues
#insert_stdcell_filler => Fills empty spaces in standard cell rows with filler cells. the tool adds the filler cells in the order that you specify, so specify them from the largest to smallest. Run after placement.
#-cell_without_metal <lib_cells> or -cell_with_metal <lib_cells> => specify filler cells that don't contain metal or those that contain metal. Tool doesn't check for DRC if "cell_without_metal" is used.
insert_stdcell_filler -cell_without_metal {FILLER_DECAP_P12L FILLER_DECAP_P6L}
final_checks:
------------
#need to find checks for drc, antenna, connectivity
signoff_drc => performs signoff design rule checking. IC validator, or Hercules license reqd.
export_final:
------------
write_parasitics -format SPEF -output final_files/digtop_starrc.spef => writes spef file. If there are min and max operating conditions, parasitics for both conditions are written. In mmmc flow, the tool uses the name of the tluplus file and the temperature associated with the corner, along with the file name you specified, to derive the file name of the parasitic file (<tluplus_file_name>_<temperature>[_<user_scaling>].<output_file_name>).
write_def -version 5.5 -output final_files/digtop_final_route.def => writes def version 5.5
write_verilog final_files/digtop_final_route.v
--------------------------